SwePub
Search the SwePub database



Search: WFRF:(Billing Erik A. 1981 )

  • Result 1-2 of 2
1.
  • Alenljung, Beatrice, et al. (author)
  • User Experience of Conveying Emotions by Touch
  • 2017
  • In: Proceedings of the 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). IEEE. ISBN 9781538635179, 9781538635193, 9781538635186. pp. 1240-1247
  • Conference paper (peer-reviewed)
    • Abstract: In the present study, 64 users were asked to convey eight distinct emotions to a humanoid Nao robot via touch, and were then asked to evaluate their experiences of performing that task. Large differences between emotions were revealed. Users perceived conveying positive/pro-social emotions as significantly easier than negative emotions, with love and disgust as the two extremes. When asked whether they would act differently towards a human, compared to the robot, the users' replies varied. A content analysis of interviews revealed a generally positive user experience (UX) while interacting with the robot, but users also found the task challenging in several ways. Three major themes with impact on the UX emerged: responsiveness, robustness, and trickiness. The results are discussed in relation to a study of human-human affective tactile interaction, with implications for human-robot interaction (HRI) and design of social and affective robotics in particular.
2.
  • Redyuk, Sergey, et al. (author)
  • Challenges in face expression recognition from video
  • 2017
  • In: SweDS 2017.
  • Conference paper (peer-reviewed)
    • Abstract: Identification of emotion from face expressions is a relatively well understood problem where state-of-the-art solutions perform almost as well as humans. However, in many practical applications, disrupting factors still make identification of face expression a very challenging problem. Within the project DREAM (Development of Robot Enhanced Therapy for Children with Autism Spectrum Disorder, ASD), we are identifying face expressions from children with ASD, during therapy. Identified face expressions are used both in the online system, to guide the behavior of the robot, and off-line, to automatically annotate video for measurements of clinical outcomes.

      This setup puts several new challenges on the face expression technology. First of all, in contrast to most open databases of face expressions comprising adult faces, we are recognizing emotions from children between the age of 4 to 7 years. Secondly, children with ASD may show emotions differently, compared to typically developed children. Thirdly, the children move freely during the intervention and, despite the use of several cameras tracking the face of the child from different angles, we rarely have a full frontal view of the face. Fourthly, and finally, the amount of native data is very limited.

      Although we have access to extensive video recorded material from therapy sessions with ASD children, potentially constituting a very valuable dataset for both training and testing of face expression implementations, this data proved to be difficult to use. A session of 10 minutes of video may comprise only a few instances of expressions, e.g. smiling. As such, although we have many hours of video in total, the data is very sparse and the number of clear face expressions is still rather small for it to be used as training data in most machine learning (ML) techniques.

      We therefore focused on the use of synthetic datasets for transfer learning, trying to overcome the challenges mentioned above. Three techniques were evaluated: (1) convolutional neural networks for image classification by analyzing separate video frames, (2) recurrent neural networks for sequence classification to capture facial dynamics, and (3) ML algorithms classifying pre-extracted facial landmarks.

      The performance of all three models was unsatisfactory. Although the proposed models were of high accuracy, approximately 98%, when classifying a test set, they performed poorly on the real-world data. This was due to the usage of a synthetic dataset which had mostly a frontal view of faces. The models, which had not seen similar examples before, failed to classify them correctly. The accuracy decreased drastically when the child rotated her head or covered a part of her face. Even if the frame clearly captured a facial expression, the ML algorithms were not able to provide a stable positive classification rate. Thus, elaboration on training datasets and designing robust ML models are required. Another option is to incorporate voice and gestures of the child into the model to classify the emotional state as a complex concept.
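A minimal sketch (not taken from the paper) of technique (1) from the abstract above: fine-tuning an ImageNet-pretrained CNN to classify facial expressions in individual video frames, mirroring the transfer-learning-from-synthetic-data setup the authors describe. The expression classes, folder layout, and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_EXPRESSIONS = 6  # assumed label set, e.g. happy/sad/angry/surprised/scared/neutral

# Start from ImageNet weights; transfer learning compensates for the sparse,
# small set of real expression examples described in the abstract.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_EXPRESSIONS)  # new classifier head

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics,
                         std=[0.229, 0.224, 0.225]),  # matching the pretrained weights
])

# Hypothetical layout: synthetic_faces/<expression_label>/*.png
train_set = datasets.ImageFolder("synthetic_faces", transform=preprocess)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # short illustrative training loop
    for frames, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(frames), labels)
        loss.backward()
        optimizer.step()
```

A model trained like this can score very well on a held-out synthetic test set (the abstract reports approximately 98%) yet still fail on real therapy video with head rotation and partial occlusion, which is exactly the generalization gap the authors report.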
Type of publication
conference paper (2)
Type of content
peer-reviewed (2)
Author/Editor
Billing, Erik A., 19 ... (2)
Alenljung, Beatrice (1)
Andreasson, Rebecca (1)
Lowe, Robert, 1975- (1)
Lindblom, Jessica (1)
Redyuk, Sergey (1)
University
University of Skövde (2)
Uppsala University (1)
Language
English (2)
Research subject (UKÄ/SCB)
Natural sciences (1)
Engineering and Technology (1)
Year
2017 (2)