1. Fraile, Marc, et al. (authors)
   End-to-End Learning and Analysis of Infant Engagement During Guided Play: Prediction and Explainability
   2022
   In: ICMI '22. New York, NY, USA: Association for Computing Machinery (ACM). ISBN 9781450393904, pp. 444-454
   Conference paper (peer-reviewed)
   Abstract: Infant engagement during guided play is a reliable indicator of early learning outcomes, psychiatric issues, and familial wellbeing. An obstacle to using such information in real-world scenarios is the need for a domain expert to assess the data. We show that an end-to-end Deep Learning approach can perform well in automatic infant engagement detection from a single video source, without requiring a clear view of the face or the whole body. To tackle the problem of explainability in learning methods, we evaluate how four common attention mapping techniques can be used to perform subjective evaluation of the network's decision process and to identify multimodal cues the network uses to discriminate engagement levels. We further propose a quantitative comparison approach: we collect a human attention baseline and evaluate its similarity to each technique.
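   The quantitative comparison described in the abstract could, in spirit, be sketched as below. The abstract does not state which similarity measure the authors use, so the metric here (Pearson correlation between flattened attention maps) and all names are illustrative assumptions, not the paper's actual method.

   ```python
   import numpy as np

   def attention_similarity(model_map: np.ndarray, human_map: np.ndarray) -> float:
       """Pearson correlation between two attention maps of the same shape.

       Both maps are flattened and z-scored, so the score is invariant to
       affine rescaling of either map and lies in [-1, 1].
       """
       m = model_map.ravel().astype(float)
       h = human_map.ravel().astype(float)
       m = (m - m.mean()) / (m.std() + 1e-8)  # z-score (epsilon guards flat maps)
       h = (h - h.mean()) / (h.std() + 1e-8)
       return float(np.mean(m * h))

   # Toy check: a map compared against itself scores ~1.0
   rng = np.random.default_rng(0)
   human_baseline = rng.random((8, 8))   # stand-in for an aggregated human attention map
   print(round(attention_similarity(human_baseline, human_baseline), 3))  # → 1.0
   ```

   In practice one would compute this score between the human baseline and each of the four attention-mapping techniques, then rank the techniques by agreement.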
2. Zhong, Mengyu, et al. (authors)
   A case study in designing trustworthy interactions: implications for socially assistive robotics
   2023
   In: Frontiers in Computer Science. Frontiers Media S.A. ISSN 2624-9898; vol. 5
   Journal article (peer-reviewed)
   Abstract: This work is a case study in applying recent high-level ethical guidelines, specifically concerning transparency and anthropomorphisation, to Human-Robot Interaction (HRI) design practice for a real-world Socially Assistive Robot (SAR) application. We use an online study to investigate how the perception and efficacy of SARs might be influenced by this design practice, examining how robot utterances and display manipulations influence perceptions of the robot and of the medical recommendations it gives. Our results suggest that applying transparency policies can improve the SAR's effectiveness without harming its perceived anthropomorphism. However, our objective measures suggest that participant understanding of the robot's decision-making process remained low across conditions. Furthermore, verbal anthropomorphisation does not seem to affect the perception or efficacy of the robot.