SwePub
Search the SwePub database

  Advanced search

Results list for the search "WFRF:(Abelho Pereira André Tiago)"

Search: WFRF:(Abelho Pereira André Tiago)

  • Results 1-10 of 19
1.
  • Abelho Pereira, André Tiago, et al. (author)
  • Effects of Different Interaction Contexts when Evaluating Gaze Models in HRI
  • 2020
  • Conference paper (peer-reviewed), abstract:
    • We previously introduced a responsive joint attention system that uses multimodal information from users engaged in a spatial reasoning task with a robot and communicates joint attention via the robot's gaze behavior [25]. An initial evaluation of our system with adults showed it to improve users' perceptions of the robot's social presence. To investigate the repeatability of our prior findings across settings and populations, here we conducted two further studies employing the same gaze system with the same robot and task but in different contexts: evaluation of the system with external observers and evaluation with children. The external observer study suggests that third-person perspectives over videos of gaze manipulations can be used either as a manipulation check before committing to costly real-time experiments or to further establish previous findings. However, the replication of our original adults study with children in school did not confirm the effectiveness of our gaze manipulation, suggesting that different interaction contexts can affect the generalizability of results in human-robot interaction gaze studies.
2.
  • Abelho Pereira, André Tiago, et al. (author)
  • Responsive Joint Attention in Human-Robot Interaction
  • 2019
  • In: Proceedings 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2019. Institute of Electrical and Electronics Engineers (IEEE). pp. 1080-1087
  • Conference paper (peer-reviewed), abstract:
    • Joint attention has been shown to be not only crucial for human-human interaction but also human-robot interaction. Joint attention can help to make cooperation more efficient, support disambiguation in instances of uncertainty and make interactions appear more natural and familiar. In this paper, we present an autonomous gaze system that uses multimodal perception capabilities to model responsive joint attention mechanisms. We investigate the effects of our system on people’s perception of a robot within a problem-solving task. Results from a user study suggest that responsive joint attention mechanisms evoke higher perceived feelings of social presence on scales that regard the direction of the robot’s perception.
3.
  • Ardal, Dui, et al. (author)
  • A Collaborative Previsualization Tool for Filmmaking in Virtual Reality
  • 2019
  • In: Proceedings - CVMP 2019: 16th ACM SIGGRAPH European Conference on Visual Media Production. New York, NY, USA: ACM Digital Library. ISBN 9781450370035
  • Conference paper (peer-reviewed), abstract:
    • Previsualization is a process within pre-production of filmmaking where filmmakers can visually plan specific scenes with camera works, lighting, character movements, etc. The costs of computer graphics-based effects are substantial within film production. Using previsualization, these scenes can be planned in detail to reduce the amount of work put on effects in the later production phase. We develop and assess a prototype for previsualization in virtual reality for collaborative purposes where multiple filmmakers can be present in a virtual environment to share a creative work experience, remotely. By performing a within-group study on 20 filmmakers, our findings show that the use of virtual reality for distributed, collaborative previsualization processes is useful for real-life pre-production purposes.
4.
  • Gillet, Sarah, et al. (author)
  • Robot Gaze Can Mediate Participation Imbalance in Groups with Different Skill Levels
  • 2021
  • In: Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction. New York, NY, USA: Association for Computing Machinery. pp. 303-311
  • Conference paper (peer-reviewed), abstract:
    • Many small group activities, like working teams or study groups, have a high dependency on the skill of each group member. Differences in skill level among participants can affect not only the performance of a team but also influence the social interaction of its members. In these circumstances, an active member could balance individual participation without exerting direct pressure on specific members by using indirect means of communication, such as gaze behaviors. Similarly, in this study, we evaluate whether a social robot can balance the level of participation in a language skill-dependent game, played by a native speaker and a second language learner. In a between-subjects study (N = 72), we compared an adaptive robot gaze behavior, which was targeted to increase the level of contribution of the least active player, with a non-adaptive gaze behavior. Our results imply that, while overall levels of speech participation were influenced predominantly by personal traits of the participants, the robot's adaptive gaze behavior could shape the interaction among participants, which led to more even participation during the game.
5.
  • He, Yuan, et al. (author)
  • Evaluating data-driven co-speech gestures of embodied conversational agents through real-time interaction
  • 2022
  • In: IVA '22: Proceedings of the 22nd ACM International Conference on Intelligent Virtual Agents. New York, NY, USA: Association for Computing Machinery (ACM).
  • Conference paper (peer-reviewed), abstract:
    • Embodied Conversational Agents (ECAs) that make use of co-speech gestures can enhance human-machine interactions in many ways. In recent years, data-driven gesture generation approaches for ECAs have attracted considerable research attention, and related methods have continuously improved. Real-time interaction is typically used when researchers evaluate ECA systems that generate rule-based gestures. However, when evaluating the performance of ECAs based on data-driven methods, participants are often required only to watch pre-recorded videos, which cannot provide adequate information about what a person perceives during the interaction. To address this limitation, we explored the use of real-time interaction to assess data-driven gesturing ECAs. We provided a testbed framework and investigated whether gestures could affect human perception of ECAs in the dimensions of human-likeness, animacy, perceived intelligence, and focused attention. Our user study required participants to interact with two ECAs - one with and one without hand gestures. We collected subjective data from the participants' self-report questionnaires and objective data from a gaze tracker. To our knowledge, the current study represents the first attempt to evaluate data-driven gesturing ECAs through real-time interaction and the first experiment using gaze tracking to examine the effect of ECAs' gestures.
6.
  • Kammerlander, Robin K., et al. (author)
  • Using Virtual Reality to Support Acting in Motion Capture with Differently Scaled Characters
  • 2021
  • In: 2021 IEEE Virtual Reality and 3D User Interfaces (VR). Institute of Electrical and Electronics Engineers (IEEE). pp. 402-410
  • Conference paper (peer-reviewed), abstract:
    • Motion capture is a well-established technology for capturing actors' movements and performances within the entertainment industry. Many actors, however, witness the poor acting conditions associated with such recordings. Instead of detailed sets, costumes and props, they are forced to play in empty spaces wearing tight suits. Often, their co-actors will be imaginary, replaced by placeholder props, or they would be out of scale with their virtual counterparts. These problems do not only affect acting, they also cause an abundance of laborious post-processing clean-up work. To solve these challenges, we propose using a combination of virtual reality and motion capture technology to bring differently proportioned virtual characters into a shared collaborative virtual environment. A within-subjects user study with trained actors showed that our proposed platform enhances their feelings of body ownership and immersion. This in turn changed actors' performances which narrowed the gap between virtual performances and final intended animations.
7.
  • Kontogiorgos, Dimosthenis, 1987-, et al. (author)
  • Behavioural Responses to Robot Conversational Failures
  • 2020
  • In: HRI '20: Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction. New York, NY, USA: ACM Digital Library.
  • Conference paper (peer-reviewed), abstract:
    • Humans and robots will increasingly collaborate in domestic environments which will cause users to encounter more failures in interactions. Robots should be able to infer conversational failures by detecting human users’ behavioural and social signals. In this paper, we study and analyse these behavioural cues in response to robot conversational failures. Using a guided task corpus, where robot embodiment and time pressure are manipulated, we ask human annotators to estimate whether user affective states differ during various types of robot failures. We also train a random forest classifier to detect whether a robot failure has occurred and compare results to human annotator benchmarks. Our findings show that human-like robots augment users’ reactions to failures, as shown in users’ visual attention, in comparison to non-humanlike smart-speaker embodiments. The results further suggest that speech behaviours are utilised more in responses to failures when non-human-like designs are present. This is particularly important to robot failure detection mechanisms that may need to consider the robot’s physical design in its failure detection model.
8.
  • Kontogiorgos, Dimosthenis, 1987-, et al. (author)
  • Embodiment Effects in Interactions with Failing Robots
  • 2020
  • In: CHI '20: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. New York, NY, USA: ACM Digital Library.
  • Conference paper (peer-reviewed), abstract:
    • The increasing use of robots in real-world applications will inevitably cause users to encounter more failures in interactions. While there is a longstanding effort in bringing human-likeness to robots, how robot embodiment affects users’ perception of failures remains largely unexplored. In this paper, we extend prior work on robot failures by assessing the impact that embodiment and failure severity have on people’s behaviours and their perception of robots. Our findings show that when using a smart-speaker embodiment, failures negatively affect users’ intention to frequently interact with the device, however not when using a human-like robot embodiment. Additionally, users significantly rate the human-like robot higher in terms of perceived intelligence and social presence. Our results further suggest that in higher severity situations, human-likeness is distracting and detrimental to the interaction. Drawing on quantitative findings, we discuss benefits and drawbacks of embodiment in robot failures that occur in guided tasks.
9.
  • Kontogiorgos, Dimosthenis, 1987-, et al. (author)
  • Estimating Uncertainty in Task Oriented Dialogue
  • 2019
  • In: ICMI 2019 - Proceedings of the 2019 International Conference on Multimodal Interaction. New York, NY, USA: ACM Digital Library. ISBN 9781450368605. pp. 414-418
  • Conference paper (peer-reviewed), abstract:
    • Situated multimodal systems that instruct humans need to handle user uncertainties, as expressed in behaviour, and plan their actions accordingly. Speakers' decision to reformulate or repair previous utterances depends greatly on the listeners' signals of uncertainty. In this paper, we estimate uncertainty in a situated guided task, as leveraged in non-verbal cues expressed by the listener, and predict that the speaker will reformulate their utterance. We use a corpus where people instruct how to assemble furniture, and extract their multimodal features. While uncertainty is in some cases verbally expressed, most instances are expressed non-verbally, which indicates the importance of multimodal approaches. In this work, we present a model for uncertainty estimation. Our findings indicate that uncertainty estimation from non-verbal cues works well, and can exceed human annotator performance when verbal features cannot be perceived.
10.
  • Kontogiorgos, Dimosthenis, 1987-, et al. (author)
  • Grounding behaviours with conversational interfaces: effects of embodiment and failures
  • 2021
  • In: Journal on Multimodal User Interfaces. Springer Science and Business Media LLC. ISSN 1783-7677, e-ISSN 1783-8738.
  • Journal article (peer-reviewed), abstract:
    • Conversational interfaces that interact with humans need to continuously establish, maintain and repair common ground in task-oriented dialogues. Uncertainty, repairs and acknowledgements are expressed in user behaviour in the continuous efforts of the conversational partners to maintain mutual understanding. Users change their behaviour when interacting with systems in different forms of embodiment, which affects the abilities of these interfaces to observe users’ recurrent social signals. Additionally, humans are intellectually biased towards social activity when facing anthropomorphic agents or when presented with subtle social cues. Two studies are presented in this paper examining how humans interact in a referential communication task with wizarded interfaces in different forms of embodiment. In study 1 (N = 30), we test whether humans respond the same way to agents, in different forms of embodiment and social behaviour. In study 2 (N = 44), we replicate the same task and agents but introduce conversational failures disrupting the process of grounding. Findings indicate that it is not always favourable for agents to be anthropomorphised or to communicate with non-verbal cues, as human grounding behaviours change when embodiment and failures are manipulated.
