SwePub
Search the SwePub database

Hit list for the search "L773:9781450393904"

Search: L773:9781450393904

  • Results 1-3 of 3
1.
  • Fraile, Marc, PhD Candidate, 1989-, et al. (authors)
  • End-to-End Learning and Analysis of Infant Engagement During Guided Play : Prediction and Explainability
  • 2022
  • In: ICMI '22. - New York, NY, USA : Association for Computing Machinery (ACM). - 9781450393904, pp. 444-454
  • Conference paper (peer-reviewed), abstract:
    • Infant engagement during guided play is a reliable indicator of early learning outcomes, psychiatric issues and familial wellbeing. An obstacle to using such information in real-world scenarios is the need for a domain expert to assess the data. We show that an end-to-end Deep Learning approach can perform well in automatic infant engagement detection from a single video source, without requiring a clear view of the face or the whole body. To tackle the problem of explainability in learning methods, we evaluate how four common attention mapping techniques can be used to perform subjective evaluation of the network’s decision process and identify multimodal cues used by the network to discriminate engagement levels. We further propose a quantitative comparison approach, by collecting a human attention baseline and evaluating its similarity to each technique.
2.
  • Yoon, Youngwoo, et al. (authors)
  • The GENEA Challenge 2022 : A large evaluation of data-driven co-speech gesture generation
  • 2022
  • In: ICMI 2022. - New York, NY, USA : Association for Computing Machinery (ACM). - 9781450393904, pp. 736-747
  • Conference paper (peer-reviewed), abstract:
    • This paper reports on the second GENEA Challenge to benchmark data-driven automatic co-speech gesture generation. Participating teams used the same speech and motion dataset to build gesture-generation systems. Motion generated by all these systems was rendered to video using a standardised visualisation pipeline and evaluated in several large, crowdsourced user studies. Unlike when comparing different research papers, differences in results are here only due to differences between methods, enabling direct comparison between systems. This year's dataset was based on 18 hours of full-body motion capture, including fingers, of different persons engaging in dyadic conversation. Ten teams participated in the challenge across two tiers: full-body and upper-body gesticulation. For each tier we evaluated both the human-likeness of the gesture motion and its appropriateness for the specific speech signal. Our evaluations decouple human-likeness from gesture appropriateness, which previously was a major challenge in the field. The evaluation results are a revolution, and a revelation. Some synthetic conditions are rated as significantly more human-like than human motion capture. To the best of our knowledge, this has never been shown before on a high-fidelity avatar. On the other hand, all synthetic motion is found to be vastly less appropriate for the speech than the original motion-capture recordings.
3.
  • Zhong, Mengyu, et al. (authors)
  • Unimodal vs. Multimodal Prediction of Antenatal Depression from Smartphone-based Survey Data in a Longitudinal Study
  • 2022
  • In: ICMI '22. - New York, NY, USA : Association for Computing Machinery (ACM). - 9781450393904, pp. 455-467
  • Conference paper (peer-reviewed), abstract:
    • Antenatal depression impacts 7-20% of women globally and can have serious consequences for both the mother and the infant. Preventative interventions are effective, but are cost-efficient only among those at high risk. As such, being able to predict and identify those at risk is invaluable for reducing the burden of care and adverse consequences, as well as improving treatment outcomes. While several approaches have been proposed in the literature for the automatic prediction of depressive states, there is a scarcity of research on automatic prediction of perinatal depression. Moreover, while some works have predicted postpartum depression automatically using data collected in clinical settings and applied the resulting model in a smartphone application, to the best of our knowledge, no previous work has investigated the automatic prediction of late antenatal depression using data collected via a smartphone app in the first and second trimesters of pregnancy. This study utilizes data measuring various aspects of self-reported psychological, physiological and behavioral information, collected from 915 women in the first and second trimesters of pregnancy using a smartphone app designed for perinatal depression. By applying machine learning algorithms to these data, this paper explores the possibility of automatic early detection of antenatal depression (i.e., during week 36 to week 42 of pregnancy) in everyday life, without the involvement of healthcare professionals. We compare uni-modal and multi-modal models and identify predictive markers related to antenatal depression. With the multi-modal approach, the model reaches a BAC of 0.75 and an AUC of 0.82.