SwePub

Search results for query: WFRF:(Leite Iolanda)

  • Results 1-50 of 97
1.
  • Almeida, João Tiago, et al. (authors)
  • Would you help me? : Linking robot's perspective-taking to human prosocial behavior
  • 2023
  • In: HRI 2023. - New York, NY, USA: Association for Computing Machinery (ACM), pp. 388-397
  • Conference paper (peer-reviewed), abstract:
    • Despite the growing literature on human attitudes toward robots, particularly prosocial behavior, little is known about how robots' perspective-taking, the capacity to perceive and understand the world from other viewpoints, could influence such attitudes and perceptions of the robot. To make robots and AI more autonomous and self-aware, more researchers have focused on developing cognitive skills such as perspective-taking and theory of mind in robots and AI. The present study investigated whether a robot's perspective-taking choices could influence the occurrence and extent of exhibiting prosocial behavior toward the robot. We designed an interaction consisting of a perspective-taking task, where we manipulated how the robot instructs the human to find objects by changing its frame of reference, and measured the human's exhibition of prosocial behavior toward the robot. In a between-subject study (N=70), we compared the robot's egocentric and addressee-centric instructions against a control condition, where the robot's instructions were object-centric. Participants' prosocial behavior toward the robot was measured using a voluntary data collection session. Our results imply that the occurrence and extent of prosocial behavior toward the robot were significantly influenced by the robot's visuospatial perspective-taking behavior. Furthermore, we observed, through questionnaire responses, that the robot's choice of perspective-taking could potentially influence the humans' perspective choices, were they to reciprocate the instructions to the robot.
3.
  • Beskow, Jonas, et al. (authors)
  • Preface
  • 2017
  • In: 17th International Conference on Intelligent Virtual Agents, IVA 2017. - Springer. - 9783319674001, pp. V-VI
  • Conference paper (peer-reviewed)
4.
  • Castellano, Ginevra, et al. (authors)
  • Chairs' Welcome
  • 2023
  • In: Proceedings HRI '23: ACM/IEEE International Conference on Human-Robot Interaction. - ACM Press.
  • Conference paper (other academic/artistic)
5.
  • Castellano, Ginevra, et al. (authors)
  • Detecting perceived quality of interaction with a robot using contextual features
  • 2017
  • In: Autonomous Robots. - Springer Science and Business Media LLC. - 0929-5593 .- 1573-7527. ; 41:5, pp. 1245-1261
  • Journal article (peer-reviewed), abstract:
    • This work aims to advance the state of the art in exploring the role of task, social context and their interdependencies in the automatic prediction of affective and social dimensions in human-robot interaction. We explored several SVM-based models with different features extracted from a set of context logs collected in a human-robot interaction experiment where children play a chess game with a social robot. The features include information about the game and the social context at the interaction level (overall features) and at the game turn level (turn-based features). While overall features capture game and social context at the interaction level, turn-based features attempt to encode the dependencies of game and social context at each turn of the game. Results showed that game and social context-based features can be successfully used to predict dimensions of quality of interaction with the robot. In particular, overall features proved to perform equally well or better than turn-based features, and game context-based features were more effective than social context-based features. Our results show that the interplay between game and social context-based features, combined with features encoding their dependencies, leads to higher recognition performance for a subset of dimensions.
6.
  • Correia, Filipa, et al. (authors)
  • Exploring Prosociality in Human-Robot Teams
  • 2019
  • In: HRI '19. - IEEE. - 9781538685556, pp. 143-151
  • Conference paper (peer-reviewed), abstract:
    • This paper explores the role of prosocial behaviour when people team up with robots in a collaborative game that presents a social dilemma similar to a public goods game. An experiment was conducted with the proposed game in which each participant joined a team with a prosocial robot and a selfish robot. During 5 rounds of the game, each player chooses between contributing to the team goal (cooperate) or contributing to his individual goal (defect). The prosociality level of the robots only affects their strategies to play the game, as one always cooperates and the other always defects. We conducted a user study at the office of a large corporation with 70 participants where we manipulated the game result (winning or losing) in a between-subjects design. Results revealed two important considerations: (1) the prosocial robot was rated more positively in terms of its social attributes than the selfish robot, regardless of the game result; (2) the perception of competence, the responsibility attribution (blame/credit), and the preference for a future partner revealed significant differences only in the losing condition. These results yield important concerns for the creation of robotic partners, the understanding of group dynamics and, from a more general perspective, the promotion of a prosocial society.
7.
  • Deshmukh, Amol, et al. (authors)
  • Towards Empathic Artificial Tutors
  • 2013
  • In: Proceedings of the 8th ACM/IEEE International Conference on Human-Robot Interaction, HRI '13, Tokyo, Japan, March 3-6, 2013. - ACM/IEEE. - 9781467330558
  • Conference paper (peer-reviewed), abstract:
    • In this paper we discuss how the EMOTE project will design, develop and evaluate a new generation of artificial embodied tutors that have perceptive capabilities to engage in empathic interactions with learners in a shared physical space.
8.
  • Dogan, Fethiye Irmak, et al. (authors)
  • Asking Follow-Up Clarifications to Resolve Ambiguities in Human-Robot Conversation
  • 2022
  • In: ACM/IEEE International Conference on Human-Robot Interaction. - IEEE Computer Society, pp. 461-469
  • Conference paper (peer-reviewed), abstract:
    • When a robot aims to comprehend its human partner's request by identifying the referenced objects in Human-Robot Conversation, ambiguities can occur because the environment might contain many similar objects or the objects described in the request might be unknown to the robot. In the case of ambiguities, most of the systems ask users to repeat their request, which assumes that the robot is familiar with all of the objects in the environment. This assumption might lead to task failure, especially in complex real-world environments. In this paper, we address this challenge by presenting an interactive system that asks for follow-up clarifications to disambiguate the described objects using the pieces of information that the robot could understand from the request and the objects in the environment that are known to the robot. To evaluate our system while disambiguating the referenced objects, we conducted a user study with 63 participants. We analyzed the interactions when the robot asked for clarifications and when it asked users to redescribe the same object. Our results show that generating follow-up clarification questions helped the robot correctly identify the described objects with fewer attempts (i.e., conversational turns). Also, when people were asked clarification questions, they perceived the task as easier, and they evaluated the task understanding and competence of the robot as higher. Our code and anonymized dataset are publicly available: https://github.com/IrmakDogan/Resolving-Ambiguities.
9.
  • Dogan, Fethiye Irmak, et al. (authors)
  • Learning to Generate Unambiguous Spatial Referring Expressions for Real-World Environments
  • 2019
  • In: IEEE International Conference on Intelligent Robots and Systems. - Institute of Electrical and Electronics Engineers (IEEE). - 9781728140049, pp. 4992-4999
  • Conference paper (peer-reviewed), abstract:
    • Referring to objects in a natural and unambiguous manner is crucial for effective human-robot interaction. Previous research on learning-based referring expressions has focused primarily on comprehension tasks, while generating referring expressions is still mostly limited to rule-based methods. In this work, we propose a two-stage approach that relies on deep learning for estimating spatial relations to describe an object naturally and unambiguously with a referring expression. We compare our method to the state-of-the-art algorithm in ambiguous environments (e.g., environments that include very similar objects with similar relationships). We show that our method generates referring expressions that people find to be more accurate (30% better) and would prefer to use (32% more often).
10.
  • Dogan, Fethiye Irmak, et al. (authors)
  • Leveraging Explainability for Understanding Object Descriptions in Ambiguous 3D Environments
  • 2023
  • In: Frontiers in Robotics and AI. - Frontiers Media SA. - 2296-9144. ; 9
  • Journal article (peer-reviewed), abstract:
    • For effective human-robot collaboration, it is crucial for robots to understand requests from users perceiving the three-dimensional space and ask reasonable follow-up questions when there are ambiguities. While comprehending the users’ object descriptions in the requests, existing studies have focused on this challenge for limited object categories that can be detected or localized with existing object detection and localization modules. Further, they have mostly focused on comprehending the object descriptions using flat RGB images without considering the depth dimension. On the other hand, in the wild, it is impossible to limit the object categories that can be encountered during the interaction, and 3-dimensional space perception that includes depth information is fundamental in successful task completion. To understand described objects and resolve ambiguities in the wild, for the first time, we suggest a method leveraging explainability. Our method focuses on the active areas of an RGB scene to find the described objects without putting the previous constraints on object categories and natural language instructions. We further improve our method to identify the described objects considering depth dimension. We evaluate our method in varied real-world images and observe that the regions suggested by our method can help resolve ambiguities. When we compare our method with a state-of-the-art baseline, we show that our method performs better in scenes with ambiguous objects which cannot be recognized by existing object detectors. We also show that using depth features significantly improves performance in scenes where depth data is critical to disambiguate the objects and across our evaluation dataset that contains objects that can be specified with and without the depth dimension.
11.
  • Dogan, Fethiye Irmak, et al. (authors)
  • Open Challenges on Generating Referring Expressions for Human-Robot Interaction
  • 2020
  • Conference paper (peer-reviewed), abstract:
    • Effective verbal communication is crucial in human-robot collaboration. When a robot helps its human partner to complete a task with verbal instructions, referring expressions are commonly employed during the interaction. Despite many studies on generating referring expressions, crucial open challenges still remain for effective interaction. In this work, we discuss some of these challenges (i.e., using contextual information, taking users’ perspectives, and handling misinterpretations in an autonomous manner).
12.
  • Dogan, Fethiye Irmak (author)
  • Robots That Understand Natural Language Instructions and Resolve Ambiguities
  • 2023
  • Doctoral thesis (other academic/artistic), abstract:
    • Verbal communication is a key challenge in human-robot interaction. For effective verbal interaction, understanding natural language instructions and clarifying ambiguous user requests are crucial for robots. In real-world environments, the instructions can be ambiguous for many reasons. For instance, when a user asks the robot to find and bring 'the porcelain mug', the mug might be located in the kitchen cabinet or on the dining room table, depending on whether it is clean or full (semantic ambiguities). Additionally, there can be multiple mugs in the same location, and the robot can disambiguate them by asking follow-up questions based on their distinguishing features, such as their color or spatial relations to other objects (visual ambiguities). While resolving ambiguities, previous works have addressed this problem by only disambiguating the objects in the robot's current view and have not considered ones outside the robot's point of view. To fill in this gap and resolve semantic ambiguities caused by objects possibly being located at multiple places, we present a novel approach by reasoning about their semantic properties. On the other hand, while dealing with ambiguous instructions caused by multiple similar objects in the same location, most of the existing systems ask users to repeat their requests with the assumption that the robot is familiar with all of the objects in the environment. To address this limitation and resolve visual ambiguities, we present an interactive system that asks for follow-up clarifications to disambiguate the described objects using the pieces of information that the robot could understand from the request and the objects in the environment that are known to the robot. In summary, in this thesis, we aim to resolve semantic and visual ambiguities to guide a robot's search for described objects specified in user instructions. With semantic disambiguation, we aim to find described objects' locations across an entire household by leveraging object semantics to form clarifying questions when there are ambiguities. After identifying object locations, with visual disambiguation, we aim to identify the specified object among multiple similar objects located in the same space. To achieve this, we suggest a multi-stage approach where the robot first identifies the objects that fit the user's description, and if there are multiple objects, the robot generates clarification questions by describing each potential target object with its spatial relations to other objects. Our results emphasize the significance of semantic and visual disambiguation for successful task completion and human-robot collaboration.
14.
  • Dogan, Fethiye Irmak, et al. (authors)
  • The impact of adding perspective-taking to spatial referencing during human-robot interaction
  • 2020
  • In: Robotics and Autonomous Systems. - Elsevier. - 0921-8890 .- 1872-793X. ; 134
  • Journal article (peer-reviewed), abstract:
    • For effective verbal communication in collaborative tasks, robots need to account for the different perspectives of their human partners when referring to objects in a shared space. For example, when a robot helps its partner find correct pieces while assembling furniture, it needs to understand how its collaborator perceives the world and refer to objects accordingly. In this work, we propose a method to endow robots with perspective-taking abilities while spatially referring to objects. To examine the impact of our proposed method, we report the results of a user study showing that when the objects are spatially described from the users' perspectives, participants take less time to find the referred objects, find the correct objects more often and consider the task easier.
15.
  • Engelhardt, Sara, et al. (authors)
  • Better faulty than sorry : Investigating social recovery strategies to minimize the impact of failure in human-robot interaction
  • 2017
  • In: WCIHAI 2017 Workshop on Conversational Interruptions in Human-Agent Interactions. - CEUR-WS, pp. 19-27
  • Conference paper (peer-reviewed), abstract:
    • Failure happens in most social interactions, possibly even more so in interactions between a robot and a human. This paper investigates different failure recovery strategies that robots can employ to minimize the negative effect on people's perception of the robot. A between-subject Wizard-of-Oz experiment with 33 participants was conducted in a scenario where a robot and a human play a collaborative game. The interaction was mainly speech-based, and controlled failures were introduced at specific moments. Three types of recovery strategies were investigated, one in each experimental condition: ignore (the robot ignores that a failure has occurred and moves on with the task), apology (the robot apologizes for failing and moves on) and problem-solving (the robot tries to solve the problem with the help of the human). Our results show that the apology-based strategy scored the lowest on measures such as likeability and perceived intelligence, and that the ignore strategy led to better perceptions of perceived intelligence and animacy than the other recovery strategies.
16.
  • Fraune, Marlena R., et al. (authors)
  • Lessons Learned About Designing and Conducting Studies From HRI Experts
  • 2022
  • In: Frontiers in Robotics and AI. - Frontiers Media SA. - 2296-9144. ; 8
  • Journal article (peer-reviewed), abstract:
    • The field of human-robot interaction (HRI) research is multidisciplinary and requires researchers to understand diverse fields, including computer science, engineering, informatics, philosophy, psychology, and more. However, it is hard to be an expert in everything. To help HRI researchers develop methodological skills, especially in areas that are relatively new to them, we conducted a virtual workshop, Workshop Your Study Design (WYSD), at the 2021 International Conference on HRI. In this workshop, we grouped participants with mentors, who are experts in areas like real-world studies, empirical lab studies, questionnaire design, interviews, participatory design, and statistics. During and after the workshop, participants discussed their proposed study methods, obtained feedback, and improved their work accordingly. In this paper, we present 1) workshop attendees' feedback about the workshop and 2) lessons that the participants learned during their discussions with mentors. Participants' responses about the workshop were positive, and future scholars who wish to run such a workshop can consider implementing their suggestions. The main contribution of this paper is the lessons-learned section, which the workshop participants helped form based on what they discovered during the workshop. We organize the lessons learned into themes of 1) improving study design for HRI, 2) how to work with participants, especially children, 3) making the most of the study's and robot's limitations, and 4) how to collaborate well across fields, as these were the areas of the papers submitted to the workshop. These themes include practical tips and guidelines to help researchers learn about fields of HRI research with which they have limited experience. We include specific examples, and researchers can adapt the tips and guidelines to their own areas to avoid common mistakes and pitfalls in their research.
17.
  • Fraune, M. R., et al. (authors)
  • Workshop YOUR study design! Participatory critique and refinement of participants' studies
  • 2021
  • In: ACM/IEEE International Conference on Human-Robot Interaction. - New York, NY, USA: IEEE Computer Society, pp. 688-690
  • Conference paper (peer-reviewed), abstract:
    • The purpose of this workshop is to help researchers develop methodological skills, especially in areas that are relatively new to them. With HRI researchers coming from diverse backgrounds in computer science, engineering, informatics, philosophy, psychology, and more disciplines, we can't all be experts in everything. In this workshop, participants will be grouped with a mentor to enhance their study design and interdisciplinary work. Participants will submit 4-page papers with a small introduction and detailed method section for a project currently in the design process. In small groups led by a mentor in the area, they will discuss their method and obtain feedback. The workshop will include time to edit and improve the study. Workshop mentors include Drs. Cindy Bethel, Hung Hsuan Huang, Selma Sabanović, Brian Scassellati, Megan Strait, Komatsu Takanori, Leila Takayama, and Ewart de Visser, with expertise in areas of real-world study, empirical lab study, questionnaire design, interview, participatory design, and statistics.
18.
  • Galatolo, Alessio, et al. (authors)
  • Personality-Adapted Language Generation for Social Robots
  • 2023
  • In: 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). - Institute of Electrical and Electronics Engineers (IEEE). - 9798350336702, pp. 1800-1807
  • Conference paper (peer-reviewed), abstract:
    • Previous works in Human-Robot Interaction have demonstrated the positive potential benefit of designing social robots which express specific personalities. In this work, we focus specifically on the adaptation of language (as the choice of words, their order, etc.) following the extraversion trait. We look to investigate whether current language models could support more autonomous generations of such personality-expressive robot output. We examine the performance of two models with user studies evaluating (i) raw text output and (ii) text output when used within multi-modal speech from the Furhat robot. We find that the ability to successfully manipulate perceived extraversion sometimes varies across different dialogue topics. We were able to achieve correct manipulation of robot personality via our language adaptation, but our results suggest further work is necessary to improve the automation and generalisation abilities of these models.
19.
  • Galatolo, Alessio, et al. (authors)
  • The Right (Wo)Man for the Job? : Exploring the Role of Gender when Challenging Gender Stereotypes with a Social Robot
  • 2022
  • In: International Journal of Social Robotics. - Springer Nature. - 1875-4791 .- 1875-4805.
  • Journal article (peer-reviewed), abstract:
    • Recent works have identified both risks and opportunities afforded by robot gendering. Specifically, robot gendering risks the propagation of harmful gender stereotypes, but may positively influence robot acceptance/impact, and/or actually offer a vehicle with which to educate about and challenge traditional gender stereotypes. Our work sits at the intersection of these ideas, to explore whether robot gendering might impact robot credibility and persuasiveness specifically when that robot is being used to try to dispel gender stereotypes and change interactant attitudes. Whilst we demonstrate no universal impact of robot gendering on first impressions of the robot, we demonstrate complex interactions between robot gendering, interactant gender and observer gender which emerge when the robot engages in challenging gender stereotypes. Combined with previous work, our results paint a mixed picture regarding how best to utilise robot gendering when challenging gender stereotypes in this way. Specifically, whilst we find some potential evidence in favour of utilising male-presenting robots for maximum impact in this context, we question whether this actually reflects the kind of gender biases we set out to challenge with this work.
20.
  • Gillet, Sarah, et al. (authors)
  • A Robot Mediated Music Mixing Activity for Promoting Collaboration among Children
  • 2020
  • In: Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, HRI 2020. - New York, NY, USA: Association for Computing Machinery (ACM), pp. 212-214
  • Conference paper (peer-reviewed), abstract:
    • Since children show favoritism of in-group members over out-group members from the age of five, children who newly arrive in a country or culture may have difficulty integrating into an already settled group. To address this problem, we developed a robot-mediated music mixing game for three players that aims to bring together children from the newly arrived and settled groups. We designed the game with the robot's goal in mind and allow the robot to observe the participation of the different players in real time. With this information, the robot can encourage equal participation in the shared activity by prompting the least active child to act. Preliminary results show that the robot can potentially succeed in influencing participation behavior. These results encourage future work that studies not only the in-game effects but also the effects on group dynamics.
21.
  • Gillet, Sarah, et al. (authors)
  • A social robot mediator to foster collaboration and inclusion among children
  • 2020
  • In: Robotics. - MIT Press.
  • Conference paper (peer-reviewed), abstract:
    • Formation of subgroups and thereby the problem of intergroup bias is well-studied in psychology. Already from the age of five, children can show ingroup preferences. We developed a social robot mediator to explore how a robot could help overcome these intergroup biases, especially for children newly arrived to a country. By utilizing an online evaluation of collaboration levels, we allow the robot to perceive and act upon the current group dynamics. We investigated the effectiveness of the robot’s mediating behavior in a between-subject study with 39 children, of whom 13 children had arrived in Sweden within the last 2 years. Results indicate that the robot could help the process of inclusion by mediating the activity. The robot succeeds in encouraging the newly arrived children to act more outgoing and in increasing collaboration among ingroup children. Further, children show a higher level of prosociality after interacting with the robot. In line with prior work, this study demonstrates the ability of social robotic technology to assist group processes.
22.
  • Gillet, Sarah (author)
  • Computational Approaches to Interaction-Shaping Robotics
  • 2024
  • Doctoral thesis (other academic/artistic), abstract:
    • The goal of this thesis is to develop computational approaches for generating autonomous social robot behaviors that can interact with multiple people and dynamically adapt to shape their interactions. Positive interactions between people impact their well-being and are essential to a fulfilled and healthy life. In this thesis, we coin the term Interaction-Shaping Robotics (ISR) as the study of robots that shape interactions between other agents, e.g., people; we capture previous efforts from the Human-Robot Interaction (HRI) community and emphasize the potential positive or negative, intended or unintended effects of these robots. Previous efforts have explored phenomena that indicate interaction-shaping capabilities of social robots; however, how to develop autonomous social robots that can adapt to positively shape interactions between people based on perceived human-human dynamics remains largely unexplored. In this thesis, we contribute to the technical advancement of social interaction-shaping robots by developing heuristics and machine learning methods and demonstrating their effectiveness in studies with real users. We focus on shaping behaviors, i.e., balancing people's participation in interactions to foster inclusion among newly-arrived and already present children in a music game and support adult second language learners and native speakers in a language game. Especially when leveraging learning techniques, an effective interaction-shaping robot needs to act socially appropriately. We design heuristics that are appropriate by design and establish the feasibility of autonomy for interaction-shaping robots through minimal perception of group dynamics and simple behavior rules. Allowing for learning behaviors for more complex interactions, we provide a formal definition of the problem of interaction-shaping and show that using imitation learning (IL) or offline reinforcement learning (RL) based on previously collected HRI data is feasible without compromising the interaction. To meet the challenge of acting appropriately, we explore techniques applied prior to deployment when learning offline from data, as well as shielding, a technique from the safe RL community, to eventually allow for learning during deployment in interaction. Overall, this thesis demonstrates the feasibility and promise of computational methods for autonomous interaction-shaping robots and demonstrates that these methods generate effective and appropriate robot behavior when balancing participation to ensure the inclusion of all human group members.
23.
  • Gillet, Sarah, et al. (authors)
  • Ice-Breakers, Turn-Takers and Fun-Makers : Exploring Robots for Groups with Teenagers
  • 2022
  • In: 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). - Institute of Electrical and Electronics Engineers (IEEE), pp. 1474-1481
  • Conference paper (peer-reviewed), abstract:
    • Successful, enjoyable group interactions are important in public and personal contexts, especially for teenagers, whose peer groups are important for self-identity and self-esteem. Social robots seemingly have the potential to positively shape group interactions, but it seems difficult to effect such impact by designing robot behaviors solely based on related (human interaction) literature. In this article, we take a user-centered approach to explore how teenagers envisage a social robot group assistant. We engaged 16 teenagers in focus groups, interviews, and robot testing to capture their views and reflections about robots for groups. Over the course of a two-week summer school, participants co-designed the action space for such a robot and experienced working with/wizarding it for 10+ hours. This experience further altered and deepened their insights into using robots as group assistants. We report results regarding teenagers' views on the applicability and use of a robot group assistant, how these expectations evolved throughout the study, and their repeat interactions with the robot. Our results indicate that each group moves on a spectrum of need for the robot, reflected in use of the robot more (or less) for ice-breaking, turn-taking, and fun-making as the situation demanded.
24.
  • Gillet, Sarah, et al. (authors)
  • Interaction-Shaping Robotics : Robots That Influence Interactions between Other Agents
  • 2024
  • In: ACM Transactions on Human-Robot Interaction. - Association for Computing Machinery (ACM). - 2573-9522. ; 13:1
  • Journal article (peer-reviewed), abstract:
    • Work in Human–Robot Interaction (HRI) has investigated interactions between one human and one robot as well as human–robot group interactions. Yet the field lacks a clear definition and understanding of the influence a robot can exert on interactions between other group members (e.g., human-to-human). In this article, we define Interaction-Shaping Robotics (ISR), a subfield of HRI that investigates robots that influence the behaviors and attitudes exchanged between two (or more) other agents. We highlight key factors of interaction-shaping robots that include the role of the robot, the robot-shaping outcome, the form of robot influence, the type of robot communication, and the timeline of the robot’s influence. We also describe three distinct structures of human–robot groups to highlight the potential of ISR in different group compositions and discuss targets for a robot’s interaction-shaping behavior. Finally, we propose areas of opportunity and challenges for future research in ISR.
  •  
25.
  • Gillet, Sarah, et al. (författare)
  • Learning Gaze Behaviors for Balancing Participation in Group Human-Robot Interactions
  • 2022
  • Ingår i: HRI '22: Proceedings of the 2022 ACM/IEEE International Conference on Human-Robot Interaction. - : Institute of Electrical and Electronics Engineers (IEEE). ; , s. 265-274
  • Konferensbidrag (refereegranskat)abstract
    • Robots can affect group dynamics. In particular, prior work has shown that robots that use hand-crafted gaze heuristics can influence human participation in group interactions. However, hand-crafting robot behaviors can be difficult and might have unexpected results in groups. Thus, this work explores learning robot gaze behaviors that balance human participation in conversational interactions. More specifically, we examine two techniques for learning a gaze policy from data: imitation learning (IL) and batch reinforcement learning (RL). First, we formulate the problem of learning a gaze policy as a sequential decision-making task focused on human turn-taking. Second, we experimentally show that IL can be used to combine strategies from hand-crafted gaze behaviors, and we formulate a novel reward function to achieve a similar result using batch RL. Finally, we conduct an offline evaluation of IL and RL policies and compare them via a user study (N=50). The results from the study show that the learned behavior policies did not compromise the interaction. Interestingly, the proposed reward for the RL formulation enabled the robot to encourage participants to take more turns during group human-robot interactions than one of the gaze heuristic behaviors from prior work. Also, the imitation learning policy led to more active participation from human participants than another prior heuristic behavior. 
  •  
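The entry above frames balancing conversational participation as a sequential decision-making task. As an illustrative sketch only (the function names, the deviation-based reward, and the greedy gaze heuristic below are invented for illustration and are not the paper's actual formulation), a reward that favors even speaking time could look like this:

```python
# Hypothetical sketch: a reward favoring balanced speaking time in a group,
# in the spirit of learning gaze policies that even out participation.
# The formulation is illustrative, not the paper's actual reward function.

def participation_reward(speaking_times):
    """Return a reward in [-1, 0]: 0 when all members spoke equally,
    more negative as participation becomes more imbalanced."""
    total = sum(speaking_times)
    if total == 0:
        return 0.0
    n = len(speaking_times)
    shares = [t / total for t in speaking_times]
    # Total deviation from the uniform share 1/n is largest (2*(n-1)/n)
    # when one person holds all the floor time.
    max_dev = 2 * (n - 1) / n
    dev = sum(abs(s - 1 / n) for s in shares)
    return -dev / max_dev

def gaze_target(speaking_times):
    """Greedy heuristic baseline: direct gaze at the least active member."""
    return min(range(len(speaking_times)), key=lambda i: speaking_times[i])
```

A learned policy (via IL or batch RL, as in the paper) would replace the greedy heuristic, with a reward of this flavor encouraging the least active members to take more turns.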
26.
  • Gillet, Sarah, et al. (författare)
  • Robot Gaze Can Mediate Participation Imbalance in Groups with Different Skill Levels
  • 2021
  • Ingår i: Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction. - New York, NY, USA : Association for Computing Machinery. ; , s. 303-311
  • Konferensbidrag (refereegranskat)abstract
    • Many small group activities, like working teams or study groups, have a high dependency on the skill of each group member. Differences in skill level among participants can affect not only the performance of a team but also influence the social interaction of its members. In these circumstances, an active member could balance individual participation without exerting direct pressure on specific members by using indirect means of communication, such as gaze behaviors. Similarly, in this study, we evaluate whether a social robot can balance the level of participation in a language skill-dependent game, played by a native speaker and a second language learner. In a between-subjects study (N = 72), we compared an adaptive robot gaze behavior, targeted to increase the level of contribution of the least active player, with a non-adaptive gaze behavior. Our results imply that, while overall levels of speech participation were influenced predominantly by personal traits of the participants, the robot’s adaptive gaze behavior could shape the interaction among participants, which led to more even participation during the game.
  •  
27.
  • Gillet, Sarah, et al. (författare)
  • Shielding for socially appropriate robot listening behaviors
  • 2024
  • Ingår i: 2024 33rd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN).
  • Konferensbidrag (refereegranskat)abstract
    • A crucial part of traditional reinforcement learning (RL) is the initial exploration phase, in which trying available actions randomly is a critical element. As random behavior might be detrimental to a social interaction, this work proposes a novel paradigm for learning social robot behavior: the use of shielding to ensure socially appropriate behavior during exploration and learning. We explore how a data-driven approach for shielding could be used to generate listening behavior. In a video-based user study (N=110), we compare shielded exploration to two other exploration methods. We show that the shielded exploration is perceived as more comforting and appropriate than a straightforward random approach. Based on our findings, we discuss the potential for future work using shielded and socially guided approaches for learning idiosyncratic social robot behaviors through RL.
  •  
28.
  • Gross, James, Professor, 1975-, et al. (författare)
  • TECoSA – Trends, Drivers, and Strategic Directions for Trustworthy Edge Computing in Industrial Applications
  • 2022
  • Ingår i: INSIGHT. - : Wiley. - 2156-485X .- 2156-4868. ; 25:4, s. 29-34
  • Tidskriftsartikel (refereegranskat)abstract
    • TECoSA – a university-based research center in collaboration with industry – was established early in 2020, focusing on Trustworthy Edge Computing Systems and Applications. This article summarizes and assesses the current trends and drivers regarding edge computing. In our analysis, edge computing provided by mobile network operators will be the initial dominating form of this new computing paradigm for the coming decade. These insights form the basis for the research agenda of the TECoSA center, highlighting more advanced use cases, including AR/VR/Cognitive Assistance, cyber-physical systems, and distributed machine learning. The article further elaborates on the identified strategic directions given these trends, emphasizing testbeds and collaborative multidisciplinary research.
  •  
29.
  • Güneysu Özgür, Arzu, et al. (författare)
  • Designing Tangible Robot Mediated Co-located Games to Enhance Social Inclusion for Neurodivergent Children
  • 2022
  • Ingår i: IDC '22. - New York : Association for Computing Machinery. - 9781450391979 ; , s. 536-543
  • Konferensbidrag (refereegranskat)abstract
    • Neurodivergent children with cognitive and communicative difficulties often experience a lower level of social integration in comparison to neurotypical children. Therefore, it is crucial to understand social inclusion challenges and address exclusion. Since previous work shows that gamified robotic activities have a high potential to enable inclusive and collaborative environments, we propose using robot-mediated games for enhancing social inclusion. In this work, we present the design of a multiplayer tangible Pacman game with three different inter-player interaction modalities: semi-dependent collaborative, dependent collaborative, and competitive. The initial usability evaluation and the observations of the experiments show the benefits of the game for creating collaborative and cooperative practices for the players, and thus also its potential for social interaction and social inclusion. Importantly, we observe that inter-player interaction design affects the communication between the players and their physical interaction with the game.
  •  
30.
  • Holk, Simon, et al. (författare)
  • PREDILECT: Preferences Delineated with Zero-Shot Language-based Reasoning in Reinforcement Learning
  • 2024
  • Ingår i: HRI 2024 - Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction. - : Association for Computing Machinery (ACM). ; , s. 259-268
  • Konferensbidrag (refereegranskat)abstract
    • Preference-based reinforcement learning (RL) has emerged as a new field in robot learning, where humans play a pivotal role in shaping robot behavior by expressing preferences on different sequences of state-action pairs. However, formulating realistic policies for robots demands responses from humans to an extensive array of queries. In this work, we approach the sample-efficiency challenge by expanding the information collected per query to contain both preferences and optional text prompting. To accomplish this, we leverage the zero-shot capabilities of a large language model (LLM) to reason from the text provided by humans. To accommodate the additional query information, we reformulate the reward learning objectives to contain flexible highlights - state-action pairs that contain relatively high information and are related to the features processed in a zero-shot fashion from a pretrained LLM. In both a simulated scenario and a user study, we reveal the effectiveness of our work by analyzing the feedback and its implications. Additionally, the collective feedback collected serves to train a robot on socially compliant trajectories in a simulated social navigation landscape. We provide video examples of the trained policies at https://sites.google.com/view/rl-predilect.
  •  
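The PREDILECT entry above builds on preference-based reward learning, where a reward model is trained from human judgments over pairs of trajectories. A minimal sketch of the standard Bradley-Terry preference likelihood commonly used in this setting (this is the generic formulation, not necessarily PREDILECT's exact objective, and the function name is hypothetical):

```python
# Sketch of the Bradley-Terry objective used in preference-based reward
# learning: the probability that trajectory A is preferred over B is a
# softmax over their summed (learned) rewards. Generic formulation, not
# necessarily the paper's exact objective.
import math

def preference_logprob(reward_sum_a, reward_sum_b, a_preferred=True):
    """Log-likelihood of an observed pairwise preference under Bradley-Terry:
    P(A > B) = exp(R_A) / (exp(R_A) + exp(R_B))."""
    m = max(reward_sum_a, reward_sum_b)  # subtract max for numerical stability
    za = math.exp(reward_sum_a - m)
    zb = math.exp(reward_sum_b - m)
    p_a = za / (za + zb)
    return math.log(p_a if a_preferred else 1.0 - p_a)
```

Maximizing this log-likelihood over collected queries fits the reward estimator; PREDILECT additionally weights informative state-action pairs ("highlights") identified from the human's text via an LLM.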
31.
  • Iovino, Matteo, et al. (författare)
  • Interactive Disambiguation for Behavior Tree Execution
  • 2022
  • Ingår i: 2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids). - : Institute of Electrical and Electronics Engineers (IEEE).
  • Konferensbidrag (refereegranskat)abstract
    • In recent years, robots have been used in an increasing variety of tasks, especially by small- and medium-sized enterprises. These tasks are usually fast-changing, involve collaborative scenarios, and happen in unpredictable environments with possible ambiguities. It is important to have methods capable of generating robot programs easily, made as general as possible by handling uncertainties. We present a system that integrates a method to learn Behavior Trees (BTs) from demonstration for pick and place tasks, with a framework that uses verbal interaction to ask follow-up clarification questions to resolve ambiguities. During the execution of a task, the system asks for user input when there is a need to disambiguate an object in the scene, i.e. when the targets of the task are objects of the same type that are present in multiple instances. The integrated system is demonstrated on different scenarios of a pick and place task, with increasing levels of ambiguity. The code used for this paper is made publicly available at https://github.com/matiov/disambiguate-BT-execution.
  •  
32.
  • Irfan, Bahar, et al. (författare)
  • Personalization in Long-Term Human-Robot Interaction
  • 2019
  • Ingår i: HRI '19. - : IEEE. - 9781538685556 ; , s. 685-686
  • Konferensbidrag (refereegranskat)abstract
    • For practical reasons, most human-robot interaction (HRI) studies focus on short-term interactions between humans and robots. However, such studies do not capture the difficulty of sustaining engagement and interaction quality across long-term interactions. Many real-world robot applications will require repeated interactions and relationship-building over the long term, and personalization and adaptation to users will be necessary to maintain user engagement and to build rapport and trust between the user and the robot. This full-day workshop brings together perspectives from a variety of research areas, including companion robots, elderly care, and educational robots, in order to provide a forum for sharing and discussing innovations, experiences, works-in-progress, and best practices which address the challenges of personalization in long-term HRI.
  •  
33.
  • Iucci, Alessandro, et al. (författare)
  • Explainable Reinforcement Learning for Human-Robot Collaboration
  • 2021
  • Ingår i: 2021 20Th International Conference On Advanced Robotics (ICAR). - : Institute of Electrical and Electronics Engineers (IEEE). ; , s. 927-934
  • Konferensbidrag (refereegranskat)abstract
    • Reinforcement learning (RL) is getting popular in the robotics field due to its ability to learn from dynamic environments. However, it is unable to provide explanations of why an output was generated. Explainability therefore becomes important in situations where humans interact with robots, such as in human-robot collaboration (HRC) scenarios. Attempts to address explainability in robotics are usually restricted to explaining a specific decision taken by the RL model, not to understanding the complete behavior of the robot. In addition, the explainability methods can only be used by domain experts, as queries and responses are not translated into natural language. This work overcomes these limitations by proposing an explainability solution for RL models applied to HRC. It mainly consists of adaptations of two methods: (i) Reward decomposition gives insight into the factors that impacted the robot's choice by decomposing the reward function. It further provides sets of relevant reasons for each decision taken during the robot's operation; (ii) Autonomous policy explanation provides a global explanation of the robot's behavior by answering queries in the form of natural language, thus making it understandable to any human user. Experiments in simulated HRC scenarios revealed an increased understanding of the optimal choices made by the robots. Additionally, our solution proved to be a powerful debugging tool for finding weaknesses in the robot's policy and assisting in its improvement.
  •  
35.
  • Jonell, Patrik, 1988-, et al. (författare)
  • Mechanical Chameleons : Evaluating the effects of a social robot’s non-verbal behavior on social influence
  • 2021
  • Ingår i: Proceedings of SCRITA 2021, a workshop at IEEE RO-MAN 2021.
  • Konferensbidrag (refereegranskat)abstract
    • In this paper we present a pilot study which investigates how non-verbal behavior affects social influence in social robots. We also present a modular system which is capable of controlling the non-verbal behavior based on the interlocutor's facial gestures (head movements and facial expressions) in real time, and a study investigating whether three different strategies for facial gestures ("still", "natural movement", i.e. movements recorded from another conversation, and "copy", i.e. mimicking the user with a four-second delay) have any effect on social influence and decision making in a "survival task". Our preliminary results show there was no significant difference between the three conditions, but this might be due to, among other things, the low number of study participants (12).
  •  
36.
  • Karlsson, Jesper, et al. (författare)
  • Calibrating Driving Styles in Motion Planning for Autonomous Vehicles
  • Annan publikation (övrigt vetenskapligt/konstnärligt)abstract
    • To display perceivably different driving styles is an important ability for autonomous vehicles in real-life traffic scenarios. By encouraging predictable driving styles, we can promote trust and collaboration between vehicle and human drivers in traffic. However, many motion planners lack the ability to provide different driving styles using one method. As a result, many applications are overly defensive to ensure safety, and cannot be tailored to the users' preferences. In this work, we build on our previous works on encoding perceivable driving styles using Signal Temporal Logic (STL) and generating motion planning trajectories using user preferences. We refine our previously proposed spatial constraints based on the Responsibility-Sensitive Safety (RSS) model. We illustrate how the motion planner can be parameterized to produce aggressive, neutral and defensive driving behaviors, respectively. We evaluate the resulting driving styles on a set of real-life inspired driving scenarios, modeled in the Carla simulator, and provide a detailed statistical analysis of the generated trajectories.
  •  
37.
  • Karlsson, Jesper, et al. (författare)
  • Encoding Human Driving Styles in Motion Planning for Autonomous Vehicles
  • 2021
  • Ingår i: 2021 IEEE International Conference on Robotics and Automation (ICRA). - : Institute of Electrical and Electronics Engineers (IEEE). ; , s. 11262-11268
  • Konferensbidrag (refereegranskat)abstract
    • Driving styles play a major role in the acceptance and use of autonomous vehicles. Yet, existing motion planning techniques can often only incorporate simple driving styles that are modeled by the developers of the planner and not tailored to the passenger. We present a new approach to encode human driving styles through the use of signal temporal logic and its robustness metrics. Specifically, we use a penalty structure that can be used in many motion planning frameworks, and calibrate its parameters to model different automated driving styles. We combine this penalty structure with a set of signal temporal logic formulae, based on the Responsibility-Sensitive Safety model, to generate trajectories that we expected to correlate with three different driving styles: aggressive, neutral, and defensive. An online study showed that people perceived different parameterizations of the motion planner as unique driving styles, and that most people tend to prefer a more defensive automated driving style, which correlated to their self-reported driving style.
  •  
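The two entries above score trajectories against Signal Temporal Logic specifications via STL's quantitative semantics ("robustness"). As a minimal illustrative sketch, assuming an invented speed trace and thresholds (these are not the papers' actual RSS-based formulae), the robustness of two basic temporal operators over a finite signal can be computed as follows:

```python
# Illustrative sketch of STL quantitative semantics ("robustness") over a
# finite, discrete-time signal. Formulae and thresholds are invented for
# illustration; they are not the papers' actual RSS-based specifications.

def robustness_always_below(signal, threshold):
    """Robustness of G (signal < threshold): the margin by which the
    worst-case sample satisfies the bound. Positive means satisfied."""
    return min(threshold - v for v in signal)

def robustness_eventually_above(signal, threshold):
    """Robustness of F (signal > threshold): the best margin achieved
    at any time step."""
    return max(v - threshold for v in signal)

# Hypothetical speed trace in m/s. A "defensive" style would weight the
# safety margin (first value) heavily; an "aggressive" style would weight
# progress (second value) instead.
speed = [8.0, 12.5, 14.0, 11.0]
print(robustness_always_below(speed, 15.0))      # 1.0
print(robustness_eventually_above(speed, 10.0))  # 4.0
```

Penalizing or rewarding such robustness values inside a motion planner's cost function is what lets one parameterization family produce perceivably different driving styles.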
38.
  • Kennedy, J., et al. (författare)
  • Learning and reusing dialog for repeated interactions with a situated social agent
  • 2017
  • Ingår i: 17th International Conference on Intelligent Virtual Agents, IVA 2017. - Cham : Springer. - 9783319674001 ; , s. 192-204
  • Konferensbidrag (refereegranskat)abstract
    • Content authoring for conversations is a limiting factor in creating verbal interactions with intelligent virtual agents. Building on techniques utilizing semi-situated learning in an incremental crowdworking pipeline, this paper introduces an embodied agent that self-authors its own dialog for social chat. In particular, the autonomous use of crowdworkers is supplemented with a generalization method that borrows and assesses the validity of dialog across conversational states. We argue that the approach offers a community-focused tailoring of dialog responses that is not available in approaches that rely solely on statistical methods across big data. We demonstrate the advantages that this can bring to interactions through data collected from 486 conversations between a situated social agent and 22 users during a 3 week long evaluation period.
  •  
39.
  • Khanna, Parag, et al. (författare)
  • Effects of Explanation Strategies to Resolve Failures in Human-Robot Collaboration
  • 2023
  • Ingår i: 2023 32nd IEEE international conference on robot and human interactive communication, RO-MAN. - : Institute of Electrical and Electronics Engineers (IEEE). ; , s. 1829-1836
  • Konferensbidrag (refereegranskat)abstract
    • Despite significant improvements in robot capabilities, they are likely to fail in human-robot collaborative tasks due to high unpredictability in human environments and varying human expectations. In this work, we explore the role of explanation of failures by a robot in a human-robot collaborative task. We present a user study incorporating common failures in collaborative tasks with human assistance to resolve the failure. In the study, a robot and a human work together to fill a shelf with objects. Upon encountering a failure, the robot explains the failure and the resolution to overcome the failure, either through handovers or humans completing the task. The study is conducted using different levels of robotic explanation based on the failure action, failure cause, and action history, and different strategies in providing the explanation over the course of repeated interaction. Our results show that the success in resolving the failures is not only a function of the level of explanation but also the type of failures. Furthermore, while novice users rate the robot higher overall in terms of their satisfaction with the explanation, their satisfaction is not only a function of the robot's explanation level at a certain round but also the prior information they received from the robot.
  •  
40.
  • Khanna, Parag, et al. (författare)
  • How do Humans take an Object from a Robot : Behavior changes observed in a User Study
  • 2023
  • Ingår i: HAI 2023 - Proceedings of the 11th Conference on Human-Agent Interaction. - : Association for Computing Machinery (ACM). ; , s. 372-374
  • Konferensbidrag (refereegranskat)abstract
    • To facilitate human-robot interaction and gain human trust, a robot should recognize and adapt to changes in human behavior. This work documents different human behaviors observed while taking objects from an interactive robot in an experimental study, categorized across two dimensions: pull force applied and handedness. We also present the changes observed in human behavior upon repeated interaction with the robot to take various objects.
  •  
41.
  • Kontogiorgos, Dimosthenis, 1987-, et al. (författare)
  • Embodiment Effects in Interactions with Failing Robots
  • 2020
  • Ingår i: CHI '20: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. - New York, NY, USA : ACM Digital Library.
  • Konferensbidrag (refereegranskat)abstract
    • The increasing use of robots in real-world applications will inevitably cause users to encounter more failures in interactions. While there is a longstanding effort in bringing human-likeness to robots, how robot embodiment affects users’ perception of failures remains largely unexplored. In this paper, we extend prior work on robot failures by assessing the impact that embodiment and failure severity have on people’s behaviours and their perception of robots. Our findings show that when using a smart-speaker embodiment, failures negatively affect users’ intention to frequently interact with the device, however not when using a human-like robot embodiment. Additionally, users significantly rate the human-like robot higher in terms of perceived intelligence and social presence. Our results further suggest that in higher severity situations, human-likeness is distracting and detrimental to the interaction. Drawing on quantitative findings, we discuss benefits and drawbacks of embodiment in robot failures that occur in guided tasks.
  •  
42.
  • Kucherenko, Taras, 1994- (författare)
  • Developing and evaluating co-speech gesture-synthesis models for embodied conversational agents
  • 2021
  • Doktorsavhandling (övrigt vetenskapligt/konstnärligt)abstract
    • A large part of our communication is non-verbal: humans use non-verbal behaviors to express various aspects of our state or intent. Embodied artificial agents, such as virtual avatars or robots, should also use non-verbal behavior for efficient and pleasant interaction. A core part of non-verbal communication is gesticulation: gestures communicate a large share of non-verbal content. For example, around 90% of spoken utterances in descriptive discourse are accompanied by gestures. Since gestures are important, generating co-speech gestures has been an essential task in the Human-Agent Interaction (HAI) and Computer Graphics communities for several decades. Evaluating gesture-generation methods has been an equally important and equally challenging part of the field's development. Consequently, this thesis contributes to both the development and evaluation of gesture-generation models. This thesis proposes three deep-learning-based gesture-generation models. The first model is deterministic, uses only audio, and generates only beat gestures. The second model is deterministic and uses both audio and text, aiming to generate meaningful gestures. A final model uses both audio and text and is probabilistic, to learn the stochastic character of human gesticulation. The methods have applications to both virtual agents and social robots. Individual research efforts in the field of gesture generation are difficult to compare, as there are no established benchmarks. To address this situation, my colleagues and I launched the first-ever gesture-generation challenge, which we called the GENEA Challenge. We have also investigated whether online participants are as attentive as offline participants and found that they are equally attentive, provided that they are well paid. Finally, we developed a system that integrates co-speech gesture-generation models into a real-time interactive embodied conversational agent. This system is intended to facilitate the evaluation of modern gesture-generation models in interaction. To further advance the development of capable gesture-generation methods, we need to advance their evaluation, and the research in this thesis supports the interpretation that evaluation is the main bottleneck limiting the field. There are currently no comprehensive co-speech gesture datasets, which should be large, high-quality, and diverse. In addition, no strong objective metrics are yet available. Creating speech-gesture datasets and developing objective metrics are highlighted as essential next steps for further field development.
  •  
43.
  • Kucherenko, Taras, 1994-, et al. (författare)
  • Gesticulator : A framework for semantically-aware speech-driven gesture generation
  • 2020
  • Ingår i: ICMI '20: Proceedings of the 2020 International Conference on Multimodal Interaction. - New York, NY, USA : Association for Computing Machinery (ACM).
  • Konferensbidrag (refereegranskat)abstract
    • During speech, people spontaneously gesticulate, which plays a key role in conveying information. Similarly, realistic co-speech gestures are crucial to enable natural and smooth interactions with social agents. Current end-to-end co-speech gesture generation systems use a single modality for representing speech: either audio or text. These systems are therefore confined to producing either acoustically-linked beat gestures or semantically-linked gesticulation (e.g., raising a hand when saying “high”): they cannot appropriately learn to generate both gesture types. We present a model designed to produce arbitrary beat and semantic gestures together. Our deep-learning based model takes both acoustic and semantic representations of speech as input, and generates gestures as a sequence of joint angle rotations as output. The resulting gestures can be applied to both virtual agents and humanoid robots. Subjective and objective evaluations confirm the success of our approach. The code and video are available at the project page svito-zar.github.io/gesticula
  •  
44.
  • Latupeirissa, Adrian Benigno (författare)
  • From Motion Pictures to Robotic Features : Adopting film sound design practices to foster sonic expression in social robotics through interactive sonification
  • 2024
  • Konstnärligt arbete (övrigt vetenskapligt/konstnärligt)abstract
    • This dissertation investigates the role of sound design in social robotics, drawing inspiration from robot depictions in science-fiction films. It addresses the limitations of robots’ movements and expressive behavior by integrating principles from film sound design, seeking to improve human-robot interaction through expressive gestures and non-verbal sounds. The compiled works are structured into two parts. The first part focuses on perceptual studies, exploring how people perceive non-verbal sounds displayed by a Pepper robot related to its movement. These studies highlighted preferences for more refined sound models, subtle sounds that blend with ambient sounds, and sound characteristics matching the robot’s visual attributes. This part also resulted in a programming interface connecting the Pepper robot with sound production tools. The second part focuses on a structured analysis of robot sounds in films, revealing three narrative themes related to robot sounds in films with implications for social robotics. The first theme involves sounds associated with the physical attributes of robots, encompassing sub-themes of sound linked to robot size, exposed mechanisms, build quality, and anthropomorphic traits. The second theme delves into sounds accentuating robots’ internal workings, with sub-themes related to learning and decision-making processes. Lastly, the third theme revolves around sounds utilized in robots’ interactions with other characters within the film scenes. Based on these works, the dissertation discusses sound design recommendations for social robotics inspired by practices in film sound design. These recommendations encompass selecting appropriate sound materials and sonic characteristics such as pitch and timbre, employing movement sound for effective communication and emotional expression, and integrating narrative and context into the interaction.
  •  
45.
  • Li, Rui, et al. (författare)
  • Comparing Human-Robot Proxemics between Virtual Reality and the Real World
  • 2019
  • Ingår i: HRI '19. - : IEEE. - 9781538685556 ; , s. 431-439
  • Konferensbidrag (refereegranskat)abstract
    • Virtual Reality (VR) can greatly benefit Human-Robot Interaction (HRI) as a tool to effectively iterate across robot designs. However, possible system limitations of VR could influence the results such that they do not fully reflect real-life encounters with robots. In order to better deploy VR in HRI, we need to establish a basic understanding of what the differences are between HRI studies in the real world and in VR. This paper investigates the differences between the real life and VR with a focus on proxemic preferences, in combination with exploring the effects of visual familiarity and spatial sound within the VR experience. Results suggested that people prefer closer interaction distances with a real, physical robot than with a virtual robot in VR. Additionally, the virtual robot was perceived as more discomforting than the real robot, which could result in the differences in proxemics. Overall, these results indicate that the perception of the robot has to be evaluated before the interaction can be studied. However, the results also suggested that VR settings with different visual familiarities are consistent with each other in how they affect HRI proxemics and virtual robot perceptions, indicating the freedom to study HRI in various scenarios in VR. The effect of spatial sound in VR drew a more complex picture and thus calls for more in-depth research to understand its influence on HRI in VR.
  •  
46.
  • Linard, Alexis, et al. (författare)
  • Formalizing Trajectories in Human-Robot Encounters via Probabilistic STL Inference
  • 2021
  • Ingår i: 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). - : Institute of Electrical and Electronics Engineers (IEEE). ; , s. 9857-9862
  • Konferensbidrag (refereegranskat)abstract
    • In this paper, we are interested in formalizing human trajectories in human-robot encounters. We consider a particular case where a human and a robot walk towards each other. A question that arises is whether, when, and how humans will deviate from their trajectory to avoid a collision. These human trajectories can then be used to generate socially acceptable robot trajectories. To model these trajectories, we propose a data-driven algorithm to extract a formal specification expressed in Signal Temporal Logic with probabilistic predicates. We evaluated our method on trajectories collected through an online study where participants had to avoid colliding with a robot in a shared environment. Further, we demonstrate that probabilistic STL is a suitable formalism to depict human behavior, choices and preferences in specific scenarios of social navigation.
  •  
47.
  • Linard, Alexis, et al. (författare)
  • Inference of Multi-Class STL Specifications for Multi-Label Human-Robot Encounters
  • 2022
  • Ingår i: 2022 IEEE/RSJ international conference on intelligent robots and systems (IROS). - : Institute of Electrical and Electronics Engineers (IEEE). ; , s. 1305-1311
  • Konferensbidrag (refereegranskat)abstract
    • This paper is interested in formalizing human trajectories in human-robot encounters. Inspired by robot navigation tasks in human-crowded environments, we consider the case where a human and a robot walk towards each other, and where humans have to avoid colliding with the incoming robot. Further, humans may exhibit different behaviors, ranging from being in a hurry/minimizing completion time to maximizing safety. We propose a decision tree-based algorithm to extract STL formulae from multi-label data. Our inference algorithm learns STL specifications from data containing multiple classes, where instances can be labelled by one or many classes. We base our evaluation on a dataset of trajectories collected through an online study reproducing human-robot encounters.
  •  
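Decision-tree inference of STL formulae, as in the entry above, repeatedly picks the candidate predicate that best separates the labelled trajectories. A minimal information-gain sketch of that split criterion (the threshold, labels, and helper names are illustrative assumptions, not the paper's implementation):

```python
import math

def entropy(labels):
    """Shannon entropy of a label multiset."""
    n = len(labels)
    counts = {l: labels.count(l) for l in set(labels)}
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def split_gain(trajs, labels, predicate):
    """Information gain of splitting trajectories by whether they
    satisfy a candidate STL atom (here: any function traj -> bool)."""
    sat = [l for t, l in zip(trajs, labels) if predicate(t)]
    unsat = [l for t, l in zip(trajs, labels) if not predicate(t)]
    n = len(labels)
    rem = sum(len(s) / n * entropy(s) for s in (sat, unsat) if s)
    return entropy(labels) - rem

# Candidate atom: "eventually the distance to the robot drops below 0.5",
# i.e. an F_[0,T] (dist < 0.5) predicate with an illustrative threshold.
trajs = [[1.0, 0.4, 0.8], [1.0, 0.9, 0.8], [0.9, 0.3, 0.7], [1.2, 1.1, 1.0]]
labels = ["hurried", "safe", "hurried", "safe"]
gain = split_gain(trajs, labels, lambda t: min(t) < 0.5)  # 1.0: perfect split
```

The tree construction then recurses on each branch with the remaining candidate atoms, yielding a conjunction/disjunction of STL predicates per class.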
48.
  • Linard, Alexis, et al. (authors)
  • Real-time RRT* with Signal Temporal Logic Preferences
  • 2023
  • In: 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE.
  • Conference paper (other academic/artistic). Abstract:
    • Signal Temporal Logic (STL) is a rigorous specification language for expressing spatio-temporal requirements and preferences. Its quantitative semantics (called robustness) quantifies to what extent the STL specifications are met. In this work, we focus on enabling STL constraints and preferences in the Real-Time Rapidly Exploring Random Tree (RT-RRT*) motion planning algorithm in an environment with dynamic obstacles. We propose a cost function that guides the algorithm towards the asymptotically most robust solution, i.e., a plan that maximally adheres to the STL specification. In experiments, we applied our method to a social navigation case, where the STL specification captures spatio-temporal preferences on how a mobile robot should avoid an incoming human in a shared space. Our results show that our approach leads to plans adhering to the STL specification, while ensuring efficient cost computation.
  •  
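The cost-function idea in the entry above — biasing tree growth toward plans with high STL robustness — can be sketched as a weighted combination of path length and negated robustness. The weights and names below are assumptions for illustration, not the paper's actual formulation:

```python
def stl_aware_cost(path_length, robustness, w_len=1.0, w_stl=2.0):
    """Combined planning cost: shorter paths are cheaper, and higher STL
    robustness (stronger adherence to the spec) reduces the cost.
    Weights trade off efficiency against spec adherence (illustrative)."""
    return w_len * path_length - w_stl * robustness

# Choosing between two candidate branches during RRT* rewiring:
cost_a = stl_aware_cost(path_length=3.0, robustness=0.4)   # adheres to spec
cost_b = stl_aware_cost(path_length=2.5, robustness=-0.3)  # shorter, violates spec
# cost_a < cost_b, so the spec-adhering branch is kept despite being longer.
```

In a real-time setting the robustness term would be evaluated incrementally on partial trajectories as the tree is rewired around moving obstacles.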
49.
  • Marta, Daniel, et al. (authors)
  • Aligning Human Preferences with Baseline Objectives in Reinforcement Learning
  • 2023
  • In: 2023 IEEE International Conference on Robotics and Automation (ICRA 2023). Institute of Electrical and Electronics Engineers (IEEE).
  • Conference paper (peer-reviewed). Abstract:
    • Practical implementations of deep reinforcement learning (deep RL) have been challenging due to a multitude of factors, such as designing reward functions that cover every possible interaction. To address the heavy burden of robot reward engineering, we aim to leverage subjective human preferences gathered in the context of human-robot interaction, while taking advantage of a baseline reward function when available. By considering baseline objectives designed beforehand, we are able to narrow down the policy space, requesting human attention only when their input matters most. To allow control over the optimization of different objectives, our approach adopts a multi-objective setting. We achieve human-compliant policies by sequentially training an optimal policy from a baseline specification and collecting queries on pairs of trajectories. These policies are obtained by training a reward estimator to generate Pareto-optimal policies that include human-preferred behaviours. Our approach ensures sample efficiency, and we conducted a user study to collect real human preferences, which we used to obtain a policy in a social navigation environment.
  •  
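Reward estimators trained from pairwise trajectory queries, as described in the entry above, are commonly built on the Bradley-Terry model of pairwise choice. A minimal sketch of that standard formulation (not necessarily the authors' exact implementation; names are illustrative):

```python
import math

def pref_prob(r_a, r_b):
    """Bradley-Terry model: probability a human prefers trajectory A
    over B, given the summed predicted rewards r_a and r_b."""
    return math.exp(r_a) / (math.exp(r_a) + math.exp(r_b))

def pref_loss(r_a, r_b, human_prefers_a):
    """Cross-entropy loss on one preference query; minimizing this over
    collected queries fits the reward estimator to human choices."""
    p = pref_prob(r_a, r_b)
    return -math.log(p if human_prefers_a else 1.0 - p)

# The estimator currently scores trajectory A higher. If the human agreed,
# the loss is small; if they disagreed, the large loss pushes scores apart.
agree = pref_loss(2.0, 0.0, human_prefers_a=True)
disagree = pref_loss(2.0, 0.0, human_prefers_a=False)
```

Combining this learned reward with a baseline objective, as the paper proposes, then becomes a multi-objective problem over the two reward signals.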
50.
  • Marta, Daniel, et al. (authors)
  • Human-Feedback Shield Synthesis for Perceived Safety in Deep Reinforcement Learning
  • 2022
  • In: IEEE Robotics and Automation Letters. Institute of Electrical and Electronics Engineers (IEEE). ISSN 2377-3766, E-ISSN 2377-3774; 7(1), pp. 406-413
  • Journal article (other academic/artistic). Abstract:
    • Despite the successes of deep reinforcement learning (RL), it is still challenging to obtain safe policies. Formal verification approaches ensure safety at all times, but usually overly restrict the agent's behaviors, since they assume adversarial behavior of the environment. Instead, we suggest focusing on perceived safety, i.e., policies that avoid undesired behaviors while having a desired level of conservativeness. To obtain policies that are perceived as safe, we propose a shield synthesis framework with two distinct loops: (1) an inner loop that trains policies with a set of actions constrained by shields whose conservativeness is parameterized, and (2) an outer loop that presents example rollouts of the policy to humans and collects their feedback to update the shield parameters in the inner loop. We demonstrate our approach on an RL benchmark of Lunar landing and a scenario in which a mobile robot navigates around humans. For the latter, we conducted two user studies to obtain policies that were perceived as safe. Our results indicate that our framework converges to policies that are perceived as safe, is robust against noisy feedback, and can query feedback for multiple policies at the same time.
  •  
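The inner-loop shield in the entry above restricts the agent's action set according to a conservativeness parameter that the outer loop tunes from human feedback. A minimal sketch of such a parameterized action filter (the risk model, threshold mapping, and names are illustrative assumptions, not the paper's synthesis procedure):

```python
def shielded_actions(actions, risk_of, conservativeness=0.5):
    """Filter an action set through a parameterized shield: an action is
    blocked if its estimated risk exceeds the threshold implied by the
    conservativeness parameter (higher conservativeness = stricter shield)."""
    threshold = 1.0 - conservativeness
    return [a for a in actions if risk_of(a) <= threshold]

# Risk estimates for four discrete actions (e.g., robot velocity levels):
risk = {"stop": 0.0, "slow": 0.2, "cruise": 0.5, "fast": 0.9}
allowed = shielded_actions(list(risk), lambda a: risk[a], conservativeness=0.6)
# With conservativeness 0.6 the threshold is 0.4, so only low-risk actions pass.
```

The outer loop would then raise or lower `conservativeness` depending on whether human raters judged the shown rollouts too risky or too timid.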
  • Results 1-50 of 97
Type of publication
conference papers (66)
journal articles (17)
doctoral theses (7)
other publications (5)
artistic works (2)
research reviews (1)
book chapters (1)
Type of content
peer-reviewed (80)
other academic/artistic (17)
Author/editor
Leite, Iolanda (95)
Torre, Ilaria (15)
Gillet, Sarah (14)
van Waveren, Sanne (14)
Tumova, Jana (13)
Dogan, Fethiye Irmak (11)
Pek, Christian (10)
Winkle, Katie (8)
Melsión, Gaspar Isaa ... (7)
Beskow, Jonas (5)
Yadollahi, Elmira (5)
Kjellström, Hedvig, ... (5)
Parreira, Maria Tere ... (5)
Marta, Daniel (5)
Holk, Simon (5)
Kragic, Danica, 1971 ... (4)
Paiva, Ana (4)
Bresin, Roberto, 196 ... (4)
Carter, Elizabeth (4)
Stower, Rebecca (4)
Castellano, Ginevra (3)
Kucherenko, Taras, 1 ... (3)
Smith, Christian (3)
Peters, Christopher (3)
Azizpour, Hossein, 1 ... (3)
Björkman, Mårten, 19 ... (3)
Vázquez, Marynel (3)
Li, B. (2)
Inam, Rafia (2)
Abelho Pereira, Andr ... (2)
Gustafson, Joakim (2)
Vinuesa, Ricardo (2)
Alexanderson, Simon (2)
Henter, Gustav Eje, ... (2)
Jensfelt, Patric, 19 ... (2)
Balaam, Madeline (2)
Nerini, Francesco Fu ... (2)
Karlsson, Jesper (2)
McMillan, Donald (2)
Romeo, Marta (2)
Lemaignan, Séverin (2)
Bartoli, Ermanno (2)
Sun, M (2)
Latupeirissa, Adrian ... (2)
Cumbal, Ronald (2)
Güneysu Özgür, Arzu (2)
Zojaji, Sahba (2)
Galatolo, Alessio (2)
Sibirtseva, Elena (2)
Hata, Alberto (2)
University
Kungliga Tekniska Högskolan (94)
Uppsala universitet (6)
Stockholms universitet (3)
Chalmers tekniska högskola (3)
Högskolan i Skövde (2)
Göteborgs universitet (1)
Umeå universitet (1)
Örebro universitet (1)
Linköpings universitet (1)
Mittuniversitetet (1)
Language
English (97)
Research subject (UKÄ/SCB)
Engineering and Technology (58)
Natural Sciences (52)
Social Sciences (11)
Humanities (6)
Medicine and Health Sciences (2)

Year
