SwePub

onr:"swepub:oai:DiVA.org:his-17301"

What Can You See? : Identifying Cues on Internal States From the Movements of Natural Social Interactions

Bartlett, Madeleine (author)
Centre for Robotics and Neural Systems (CRNS), University of Plymouth, Plymouth, United Kingdom
Edmunds, Charlotte E.R. (author)
Warwick Business School, University of Warwick, Coventry, United Kingdom
Belpaeme, Tony (author)
Centre for Robotics and Neural Systems (CRNS), University of Plymouth, Plymouth, United Kingdom / ID Lab—imec, University of Ghent, Belgium
Thill, Serge (author)
University of Skövde, School of Informatics, Research Centre for Information Technology, Interaction Lab (ILAB) / Donders Institute for Brain, Cognition, and Behavior, Radboud University, Nijmegen, Netherlands
Lemaignan, Séverin (author)
Bristol Robotics Lab, University of the West of England, Bristol, United Kingdom
2019-06-26
2019
English.
In: Frontiers in Robotics and AI. Frontiers Research Foundation. ISSN 2296-9144. 6:49
  • Journal article (peer-reviewed)
Abstract
In recent years, the field of Human-Robot Interaction (HRI) has seen an increasing demand for technologies that can recognize and adapt to human behaviors and internal states (e.g., emotions and intentions). Psychological research suggests that human movements are important for inferring internal states. There is, however, a need to better understand what kind of information can be extracted from movement data, particularly in unconstrained, natural interactions. The present study examines which internal states and social constructs humans identify from movement in naturalistic social interactions. Participants either viewed clips of the full scene or processed versions of it displaying 2D positional data. Then, they were asked to fill out questionnaires assessing their social perception of the viewed material. We analyzed whether the full scene clips were more informative than the 2D positional data clips. First, we calculated the inter-rater agreement between participants in both conditions. Then, we employed machine learning classifiers to predict the internal states of the individuals in the videos based on the ratings obtained. Although we found a higher inter-rater agreement for full scenes compared to positional data, the level of agreement in the latter case was still above chance, thus demonstrating that the internal states and social constructs under study were identifiable in both conditions. A factor analysis run on participants' responses showed that participants identified the constructs interaction imbalance, interaction valence and engagement regardless of video condition. The machine learning classifiers achieved a similar performance in both conditions, again supporting the idea that movement alone carries relevant information. Overall, our results suggest it is reasonable to expect a machine learning algorithm, and consequently a robot, to successfully decode and classify a range of internal states and social constructs using low-dimensional data (such as the movements and poses of observed individuals) as input.
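For readers who want to experiment with the kind of analysis the abstract describes (inter-rater agreement on questionnaire ratings, followed by machine-learning classification), the sketch below illustrates the general approach in Python. It uses synthetic data and assumed names (rater_a, rater_b, features, labels); it is not the authors' analysis code, and the agreement measure and classifier are stand-in choices.

# Minimal sketch (synthetic data, assumed names) of the two analysis steps
# mentioned in the abstract: inter-rater agreement and rating-based
# classification of internal states.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic Likert-style ratings (1-5) from two hypothetical raters over 60 clips.
rater_a = rng.integers(1, 6, size=60)
rater_b = np.clip(rater_a + rng.integers(-1, 2, size=60), 1, 5)

# Quadratically weighted kappa is one common agreement measure for ordinal ratings.
print("weighted kappa:", round(cohen_kappa_score(rater_a, rater_b, weights="quadratic"), 2))

# Synthetic per-clip feature vectors (e.g., averaged ratings or pose-derived
# movement statistics) and binary internal-state labels for the clips.
features = rng.normal(size=(60, 8))
labels = rng.integers(0, 2, size=60)

# Cross-validated accuracy of a generic classifier, analogous to comparing
# classifier performance between the full-scene and positional-data conditions.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("mean CV accuracy:", round(cross_val_score(clf, features, labels, cv=5).mean(), 2))

With real data, the feature vectors would be derived from the full-scene or 2D positional (pose) clips described in the abstract rather than generated randomly.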

Subject terms

NATURAL SCIENCES  -- Computer and Information Sciences -- Human Computer Interaction (hsv//eng)

Keywords

social psychology
human-robot interaction
machine learning
social interaction
recognition
Interaction Lab (ILAB)

Publication and content type

ref (subject category)
art (subject category)
