SwePub

Hit list for the search "WFRF:(Maquiling Virmarie)"


  • Results 1-2 of 2
1.
  • Byrne, Sean Anthony, et al. (author)
  • Precise localization of corneal reflections in eye images using deep learning trained on synthetic data
  • In: Behavior Research Methods. - ISSN 1554-3528.
  • Journal article (peer-reviewed), abstract:
    • We present a deep learning method for accurately localizing the center of a single corneal reflection (CR) in an eye image. Unlike previous approaches, we use a convolutional neural network (CNN) trained solely on synthetic data, which completely sidesteps the time-consuming manual annotation required for supervised training on real eye images. To systematically evaluate the accuracy of our method, we first tested it on images with synthetic CRs placed on different backgrounds and embedded in varying levels of noise. Second, we tested the method on two datasets of high-quality videos captured from real eyes. Our method outperformed state-of-the-art algorithmic methods on real eye images, improving spatial precision by 3-41.5% across datasets, and performed on par with the state of the art on synthetic images in terms of spatial accuracy. We conclude that our method provides precise CR center localization and offers a solution to the data-availability problem, one of the major roadblocks in the development of deep learning models for gaze estimation. Owing to its superior CR center localization and ease of application, our method has the potential to improve the accuracy and precision of CR-based eye trackers.
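The record above describes training solely on synthetic CR images. As a minimal illustration of the underlying idea (not the paper's actual CNN pipeline), the sketch below renders a synthetic corneal reflection as a Gaussian spot on a noisy background and recovers its center with an intensity-weighted centroid; all image sizes and parameter values are arbitrary assumptions.

```python
# Illustrative sketch only: a synthetic corneal reflection (CR) as a
# Gaussian spot plus pixel noise, localized by a weighted centroid.
import numpy as np

def synth_cr_image(h=64, w=64, cx=40.0, cy=24.0, sigma=2.0, noise=0.05, seed=0):
    """Render a Gaussian spot centered at (cx, cy) with additive noise."""
    rng = np.random.default_rng(seed)
    ys, xs = np.mgrid[0:h, 0:w]
    spot = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
    return spot + noise * rng.standard_normal((h, w))

def centroid_localize(img, thresh=0.5):
    """Estimate the spot center as the centroid of above-threshold pixels."""
    mask = img > thresh * img.max()
    ys, xs = np.nonzero(mask)
    weights = img[ys, xs]
    return (xs * weights).sum() / weights.sum(), (ys * weights).sum() / weights.sum()

img = synth_cr_image()
est_x, est_y = centroid_localize(img)
```

A CNN trained for this task would regress the same center coordinates, but from images whose backgrounds and noise levels are varied systematically during data generation, so no manual annotation is ever needed.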
2.
  • Maquiling, Virmarie, et al. (author)
  • V-ir-Net: A Novel Neural Network for Pupil and Corneal Reflection Detection trained on Simulated Light Distributions
  • 2023
  • In: MobileHCI '23 Companion: Proceedings of the 25th International Conference on Mobile Human-Computer Interaction. - ISBN 9781450399241. - pp. 1-7
  • Conference paper (peer-reviewed), abstract:
    • Deep learning has shown promise for gaze estimation in Virtual Reality (VR) and other head-mounted applications, but such models are hard to train due to the lack of available data. Here we introduce a novel method for training neural networks for gaze estimation using synthetic images that model the light distributions captured in a P-CR (pupil-corneal reflection) setup. We tested our model on a dataset of real eye images from a VR setup, achieving 76% accuracy, close to that of the state-of-the-art model trained on the dataset itself. The localization error was 1.56 pixels for CRs and 2.02 pixels for the pupil, on par with the state of the art. Our approach allowed inference on the whole dataset without sacrificing data for model training. Our method provides a cost-efficient and lightweight training alternative, eliminating the need for hand-labeled data. It offers flexible customization, e.g., adapting to different illuminator configurations, with minimal code changes.
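The abstract above mentions simulating the light distributions of a P-CR setup. Purely as a hedged sketch (not V-ir-Net itself), the snippet below composes a toy training frame with a dark pupil disk and one Gaussian glint per illuminator; the illuminator layout, radii, and intensity values are invented for illustration.

```python
# Toy "P-CR" frame: dark pupil disk plus one Gaussian glint per illuminator.
# All positions and intensities are arbitrary assumptions for the demo.
import numpy as np

def render_pcr_frame(h=96, w=96, pupil=(48.0, 48.0, 14.0),
                     glints=((38.0, 42.0), (58.0, 42.0)), sigma=1.5):
    """Return an h x w float image in [0, 1] with pupil and CR glints."""
    ys, xs = np.mgrid[0:h, 0:w]
    img = np.full((h, w), 0.6)                             # iris/sclera background
    px, py, pr = pupil
    img[(xs - px) ** 2 + (ys - py) ** 2 <= pr ** 2] = 0.1  # dark pupil disk
    for gx, gy in glints:                                  # one glint per illuminator
        img += np.exp(-((xs - gx) ** 2 + (ys - gy) ** 2) / (2 * sigma ** 2))
    return np.clip(img, 0.0, 1.0)

frame = render_pcr_frame()
```

Changing the `glints` tuple is all it takes to model a different illuminator configuration, which mirrors the flexibility the abstract claims for the synthetic-training approach.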
