SwePub
Search the SwePub database


Result list for the search "WFRF:(Skantze Gabriel 1975 )"


  • Result 1-10 of 62
1.
  • Ahlberg, Sofie, et al. (author)
  • Co-adaptive Human-Robot Cooperation : Summary and Challenges
  • 2022
  • In: Unmanned Systems. - : World Scientific Pub Co Pte Ltd. - 2301-3850 .- 2301-3869. ; 10:02, pp. 187-203
  • Journal article (peer-reviewed)
    • The work presented here is a culmination of developments within the Swedish project COIN: Co-adaptive human-robot interactive systems, funded by the Swedish Foundation for Strategic Research (SSF), which addresses a unified framework for co-adaptive methodologies in human-robot co-existence. We investigate co-adaptation in the context of safe planning/control, trust, and multi-modal human-robot interactions, and present novel methods that allow humans and robots to adapt to one another and discuss directions for future work.
  •  
2.
  • Ashkenazi, Shaul, et al. (author)
  • Goes to the Heart: Speaking the User's Native Language
  • 2024
  • In: HRI 2024 Companion - Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction. - : Association for Computing Machinery (ACM). ; pp. 214-218
  • Conference paper (peer-reviewed)
    • We are developing a social robot to work alongside human support workers who help new arrivals in a country to navigate the necessary bureaucratic processes in that country. The ultimate goal is to develop a robot that can support refugees and asylum seekers in the UK. As a first step, we are targeting a less vulnerable population with similar support needs: international students at the University of Glasgow. As the target users are in a new country and may be in a state of stress when they seek support, forcing them to communicate in a foreign language will only fuel their anxiety, so a crucial aspect of the robot design is that it should speak the users' native language if at all possible. We provide a technical description of the robot hardware and software, and describe the user study that will shortly be carried out. At the end, we explain how we are engaging with refugee support organisations to extend the robot into one that can also support refugees and asylum seekers.
  •  
3.
  • Axelsson, Agnes, 1992- (author)
  • Adaptive Robot Presenters : Modelling Grounding in Multimodal Interaction
  • 2023
  • Doctoral thesis (other academic/artistic)
    • This thesis addresses the topic of grounding in human-robot interaction, that is, the process by which the human and robot can ensure mutual understanding. To explore this topic, the scenario of a robot holding a presentation to a human audience is used, where the robot has to process multimodal feedback from the human in order to adapt the presentation to the human's level of understanding. First, the use of behaviour trees to model real-time interactive processes of the presentation is addressed. A system based on the behaviour tree architecture is used in a semi-automated Wizard-of-Oz experiment, showing that audience members prefer an adaptive system to a non-adaptive alternative. Next, the thesis addresses the use of knowledge graphs to represent the content of the presentation given by the robot. By building a small, local knowledge graph containing properties (edges) that represent facts about the presentation, the system can iterate over that graph and consistently find ways to refer to entities by referring to previously grounded content. A system based on this architecture is implemented, and an evaluation using simulated users is presented. The results show that crowdworkers comparing different adaptation strategies are sensitive to the types of adaptation enabled by the knowledge graph approach. In a face-to-face presentation setting, feedback from the audience can potentially be expressed through various modalities, including speech, head movements, gaze, facial gestures and body pose. The thesis explores how such feedback can be automatically classified. A corpus of human-robot interactions is annotated, and models are trained to classify human feedback as positive, negative or neutral. A relatively high accuracy is achieved by training simple classifiers with signals found mainly in the speech and head movements. When knowledge graphs are used as the underlying representation of the system's presentation, some consistent way of generating text that can be turned into speech is required. This graph-to-text problem is explored by proposing several methods, both template-based and methods based on zero-shot generation using large language models (LLMs). A novel evaluation method using a combination of factual, counter-factual and fictional graphs is proposed. Finally, the thesis presents and evaluates a fully automated system using all of the components above. The results show that audience members prefer the adaptive system to a non-adaptive system, matching the results from the beginning of the thesis. However, clear learning effects are not found, which suggests that the entertainment aspects of the presentation are perhaps more prominent than the learning aspects.
  •  
4.
  • Axelsson, Agnes, 1992-, et al. (author)
  • Do you follow? : A fully automated system for adaptive robot presenters
  • 2023
  • In: HRI 2023. - New York, NY, USA : Association for Computing Machinery (ACM). ; pp. 102-111
  • Conference paper (peer-reviewed)
    • An interesting application for social robots is to act as a presenter, for example as a museum guide. In this paper, we present a fully automated system architecture for building adaptive presentations for embodied agents. The presentation is generated from a knowledge graph, which is also used to track the grounding state of information, based on multimodal feedback from the user. We introduce a novel way to use large-scale language models (GPT-3 in our case) to lexicalise arbitrary knowledge graph triples, greatly simplifying the design of this aspect of the system. We also present an evaluation where 43 participants interacted with the system. The results show that users prefer the adaptive system and consider it more human-like and flexible than a static version of the same system, but only partial results are seen in their learning of the facts presented by the robot.
  •  
5.
  • Axelsson, Agnes, 1992-, et al. (author)
  • Modeling Feedback in Interaction With Conversational Agents—A Review
  • 2022
  • In: Frontiers in Computer Science. - : Frontiers Media SA. - 2624-9898. ; 4
  • Research review (peer-reviewed)
    • Intelligent agents interacting with humans through conversation (such as a robot, embodied conversational agent, or chatbot) need to receive feedback from the human to make sure that their communicative acts have the intended consequences. At the same time, the human interacting with the agent will also seek feedback, in order to ensure that their own communicative acts have the intended consequences. In this review article, we give an overview of past and current research on how intelligent agents can both give meaningful feedback to humans and understand feedback given by users. The review covers feedback across different modalities (e.g., speech, head gestures, gaze, and facial expression), different forms of feedback (e.g., backchannels, clarification requests), and models that allow the agent to assess the user's level of understanding and adapt its behavior accordingly. Finally, we analyse some shortcomings of current approaches to modeling feedback, and identify important directions for future research.
  •  
6.
  • Axelsson, Agnes, 1992-, et al. (author)
  • Multimodal User Feedback During Adaptive Robot-Human Presentations
  • 2022
  • In: Frontiers in Computer Science. - : Frontiers Media SA. - 2624-9898. ; 3
  • Journal article (peer-reviewed)
    • Feedback is an essential part of all communication, and agents communicating with humans must be able to both give and receive feedback in order to ensure mutual understanding. In this paper, we analyse multimodal feedback given by humans towards a robot that is presenting a piece of art in a shared environment, similar to a museum setting. The data analysed contains both video and audio recordings of 28 participants, and the data has been richly annotated both in terms of multimodal cues (speech, gaze, head gestures, facial expressions, and body pose), as well as the polarity of any feedback (negative, positive, or neutral). We train statistical and machine learning models on the dataset, and find that random forest models and multinomial regression models perform well on predicting the polarity of the participants' reactions. An analysis of the different modalities shows that most information is found in the participants' speech and head gestures, while much less information is found in their facial expressions, body pose and gaze. An analysis of the timing of the feedback shows that most feedback is given when the robot makes pauses (and thereby invites feedback), but that the more exact timing of the feedback does not affect its meaning.
  •  
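The polarity-classification setup in the abstract above can be sketched in miniature. The paper trains random forest and multinomial regression models; here a simple nearest-centroid classifier stands in so the sketch stays dependency-free, and the feature names and values are invented for illustration:

```python
# Toy polarity classifier over multimodal feedback features.
# Nearest-centroid stands in for the paper's random forest / regression models.
import math
from collections import defaultdict

def fit_centroids(samples):
    """samples: list of (feature_vector, label); returns label -> mean vector."""
    sums, counts = defaultdict(lambda: None), defaultdict(int)
    for vec, label in samples:
        if sums[label] is None:
            sums[label] = [0.0] * len(vec)
        sums[label] = [a + b for a, b in zip(sums[label], vec)]
        counts[label] += 1
    return {lab: [v / counts[lab] for v in s] for lab, s in sums.items()}

def predict(centroids, vec):
    """Pick the label whose centroid is closest to the feature vector."""
    return min(centroids, key=lambda lab: math.dist(centroids[lab], vec))

# Hypothetical features: (speech_positivity, head_nod_rate).
train = [
    ([0.9, 0.8], "positive"), ([0.8, 0.9], "positive"),
    ([0.1, 0.1], "negative"), ([0.2, 0.0], "negative"),
    ([0.5, 0.5], "neutral"),
]
centroids = fit_centroids(train)
print(predict(centroids, [0.85, 0.9]))  # -> positive
```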
7.
  • Axelsson, Agnes, 1992-, et al. (author)
  • Robots in autonomous buses: Who hosts when no human is there?
  • 2024
  • In: HRI 2024 Companion - Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction. - : Association for Computing Machinery (ACM). ; pp. 1278-1280
  • Conference paper (peer-reviewed)
    • In mid-2023, we performed an experiment in autonomous buses in Stockholm, Sweden, to evaluate the role that social robots might have in such settings, and their effects on passengers' feeling of safety and security, given the absence of human drivers or clerks. To address the situations that may occur in autonomous public transit (APT), we compared an embodied agent to a disembodied agent. In this video publication, we showcase some of the things that worked with the interactions we created, and some problematic issues that we had not anticipated.
  •  
8.
  • Axelsson, Agnes, 1992-, et al. (author)
  • Using Large Language Models for Zero-Shot Natural Language Generation from Knowledge Graphs
  • 2023
  • In: Proceedings of the Workshop on Multimodal, Multilingual Natural Language Generation and Multilingual WebNLG Challenge (MM-NLG 2023). - : Association for Computational Linguistics (ACL). ; pp. 39-54
  • Conference paper (peer-reviewed)
    • In any system that uses structured knowledge graph (KG) data as its underlying knowledge representation, KG-to-text generation is a useful tool for turning parts of the graph data into text that can be understood by humans. Recent work has shown that models that make use of pretraining on large amounts of text data can perform well on the KG-to-text task, even with relatively little training data on the specific graph-to-text task. In this paper, we build on this concept by using large language models to perform zero-shot generation based on nothing but the model’s understanding of the triple structure from what it can read. We show that ChatGPT achieves near state-of-the-art performance on some measures of the WebNLG 2020 challenge, but falls behind on others. Additionally, we compare factual, counter-factual and fictional statements, and show that there is a significant connection between what the LLM already knows about the data it is parsing and the quality of the output text.
  •  
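The zero-shot KG-to-text idea above can be illustrated with a minimal sketch of serialising triples into a prompt for an LLM. The function name, triple serialisation and prompt wording are invented for illustration and are not taken from the paper:

```python
# Hypothetical sketch: flatten (subject, predicate, object) triples into a
# zero-shot verbalisation prompt, in the spirit of the KG-to-text setup above.
def triples_to_prompt(triples):
    """Serialise triples and ask the model for a fluent verbalisation."""
    lines = [f"({s} | {p} | {o})" for s, p, o in triples]
    return (
        "Verbalise the following knowledge-graph triples as fluent English:\n"
        + "\n".join(lines)
    )

triples = [
    ("Mona_Lisa", "creator", "Leonardo_da_Vinci"),
    ("Mona_Lisa", "location", "Louvre"),
]
prompt = triples_to_prompt(triples)
print(prompt)
```

The resulting string would then be sent to the LLM of choice; the factual/counter-factual comparison in the paper amounts to swapping the objects in such triples and inspecting the generated text.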
9.
  • Axelsson, Nils, 1992-, et al. (author)
  • Modelling Adaptive Presentations in Human-Robot Interaction using Behaviour Trees
  • 2019
  • In: 20th Annual Meeting of the Special Interest Group on Discourse and Dialogue. - Stroudsburg, PA : Association for Computational Linguistics (ACL). ; pp. 345-352
  • Conference paper (peer-reviewed)
    • In dialogue, speakers continuously adapt their speech to accommodate the listener, based on the feedback they receive. In this paper, we explore the modelling of such behaviours in the context of a robot presenting a painting. A Behaviour Tree is used to organise the behaviour on different levels, and allow the robot to adapt its behaviour in real-time; the tree organises engagement, joint attention, turn-taking, feedback and incremental speech processing. An initial implementation of the model is presented, and the system is evaluated in a user study, where the adaptive robot presenter is compared to a non-adaptive version. The adaptive version is found to be more engaging by the users, although no effects are found on the retention of the presented material.
  •  
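The behaviour-tree control structure described in the abstract above can be sketched with the two classic composite nodes, Sequence and Selector. The presenter scenario, node layout and feedback labels below are simplified illustrations, not the paper's actual tree:

```python
# Minimal behaviour-tree sketch: Selector tries children until one succeeds;
# Sequence requires every child to succeed in order.
SUCCESS, FAILURE = "success", "failure"

class Leaf:
    def __init__(self, fn):
        self.fn = fn
    def tick(self, ctx):
        return self.fn(ctx)

class Sequence:
    def __init__(self, *children):
        self.children = children
    def tick(self, ctx):
        for child in self.children:
            if child.tick(ctx) == FAILURE:
                return FAILURE
        return SUCCESS

class Selector:
    def __init__(self, *children):
        self.children = children
    def tick(self, ctx):
        for child in self.children:
            if child.tick(ctx) == SUCCESS:
                return SUCCESS
        return FAILURE

# Toy presenter policy: elaborate when the listener signals non-understanding,
# otherwise move on to the next presentation segment.
tree = Selector(
    Sequence(
        Leaf(lambda ctx: SUCCESS if ctx["feedback"] == "negative" else FAILURE),
        Leaf(lambda ctx: ctx.setdefault("log", []).append("elaborate") or SUCCESS),
    ),
    Leaf(lambda ctx: ctx.setdefault("log", []).append("next_segment") or SUCCESS),
)

ctx = {"feedback": "negative"}
tree.tick(ctx)  # listener confused -> the robot elaborates
```

Ticking the tree once per turn, with the feedback field updated from perception, gives the real-time adaptivity the abstract describes.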
10.
  • Axelsson, Nils, 1992-, et al. (author)
  • Using knowledge graphs and behaviour trees for feedback-aware presentation agents
  • 2020
  • In: Proceedings of Intelligent Virtual Agents 2020. - New York, NY, USA : Association for Computing Machinery (ACM).
  • Conference paper (peer-reviewed)
    • In this paper, we address the problem of how an interactive agent (such as a robot) can present information to an audience and adapt the presentation according to the feedback it receives. We extend a previous behaviour tree-based model to generate the presentation from a knowledge graph (Wikidata), which allows the agent to handle feedback incrementally, and adapt accordingly. Our main contribution is using this knowledge graph not just for generating the system’s dialogue, but also as the structure through which short-term user modelling happens. In an experiment using simulated users and third-party observers, we show that referring expressions generated by the system are rated more highly when they adapt to the type of feedback given by the user, and when they are based on previously grounded information as opposed to new information.
  •  
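The idea of using the knowledge graph itself to track grounding, as in the abstract above, can be sketched as a graph whose edges carry a grounded flag, with referring expressions preferring already-grounded facts. The class, graph content and phrasing template are hypothetical, not taken from the paper's system:

```python
# Illustrative sketch: grounding state on knowledge-graph edges, used when
# generating referring expressions for entities in a presentation.
class PresentationGraph:
    def __init__(self, triples):
        # Each edge is a (subject, predicate, object) triple plus a flag
        # recording whether the user has acknowledged (grounded) it.
        self.edges = [{"triple": t, "grounded": False} for t in triples]

    def ground(self, triple):
        """Mark a fact as grounded after positive user feedback."""
        for edge in self.edges:
            if edge["triple"] == triple:
                edge["grounded"] = True

    def refer(self, entity):
        """Refer via a grounded fact if one exists, else introduce a new one."""
        about = [e for e in self.edges if e["triple"][0] == entity]
        grounded = [e for e in about if e["grounded"]]
        subj, pred, obj = (grounded or about)[0]["triple"]
        return f"{subj}, whose {pred} is {obj}"

g = PresentationGraph([
    ("The_Scream", "creator", "Edvard_Munch"),
    ("The_Scream", "location", "Oslo"),
])
g.ground(("The_Scream", "location", "Oslo"))
print(g.refer("The_Scream"))  # prefers the grounded 'location' fact
```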
Type of publication
conference paper (46)
journal article (10)
doctoral thesis (4)
research review (2)
Type of content
peer-reviewed (56)
other academic/artistic (6)
Author/Editor
Skantze, Gabriel, 19 ... (59)
Axelsson, Agnes, 199 ... (6)
Peters, Christopher (5)
Gustafson, Joakim (3)
Yang, Fangkai (3)
Li, Chengjie (3)
Skantze, Gabriel, Pr ... (3)
Székely, Eva (2)
Gao, Alex Yuan (2)
Traum, David (2)
Avramova, Vanya (2)
Axelsson, Nils, 1992 ... (2)
Romeo, Marta (2)
Shore, Todd (2)
Kragic, Danica, 1971 ... (1)
Abelho Pereira, Andr ... (1)
Oertel, Catharine (1)
Dimarogonas, Dimos V ... (1)
Beskow, Jonas (1)
Ahlberg, Sofie (1)
Axelsson, Agnes (1)
Yu, Pian (1)
Shaw Cortez, Wencesl ... (1)
Ghadirzadeh, Ali (1)
Castellano, Ginevra (1)
Alexanderson, Simon (1)
Lopes, J. (1)
Albert, Saul (1)
Carlson, Rolf (1)
Maraev, Vladislav, 1 ... (1)
Hough, Julian (1)
Fallgren, Per (1)
Pereira, André (1)
Ashkenazi, Shaul (1)
Stuart-Smith, Jane (1)
Foster, Mary Ellen (1)
Boye, Johan (1)
André, Elisabeth, Pr ... (1)
Buschmeier, Hendrik (1)
Vaddadi, Bhavana, Ph ... (1)
Bogdan, Cristian M, ... (1)
Aylett, Matthew Pete ... (1)
McMillan, Donald (1)
Fischer, Joel (1)
Reyes-Cruz, Gisela (1)
Gkatzia, Dimitra (1)
Dogan, Fethiye Irmak (1)
Wennberg, Ulme (1)
Hernandez Garcia, Da ... (1)
Blomsma, Peter (1)
University
Royal Institute of Technology (62)
Uppsala University (2)
University of Gothenburg (1)
Language
English (62)
Research subject (UKÄ/SCB)
Natural sciences (55)
Engineering and Technology (8)
Humanities (3)
