SwePub
Search the SwePub database

  Advanced search

Results list for search "WFRF:(Oertel Catharine)"

Search: WFRF:(Oertel Catharine)

  • Results 1-10 of 34
Sort/group the results list

Numbering | Reference | Cover image | Find
1.
  • Abelho Pereira, André Tiago, et al. (author)
  • Effects of Different Interaction Contexts when Evaluating Gaze Models in HRI
  • 2020
  • Conference paper (peer-reviewed) abstract
    • We previously introduced a responsive joint attention system that uses multimodal information from users engaged in a spatial reasoning task with a robot and communicates joint attention via the robot's gaze behavior [25]. An initial evaluation of our system with adults showed it to improve users' perceptions of the robot's social presence. To investigate the repeatability of our prior findings across settings and populations, here we conducted two further studies employing the same gaze system with the same robot and task but in different contexts: evaluation of the system with external observers and evaluation with children. The external observer study suggests that third-person perspectives over videos of gaze manipulations can be used either as a manipulation check before committing to costly real-time experiments or to further establish previous findings. However, the replication of our original adults study with children in school did not confirm the effectiveness of our gaze manipulation, suggesting that different interaction contexts can affect the generalizability of results in human-robot interaction gaze studies.
2.
  • Abelho Pereira, André Tiago, et al. (author)
  • Responsive Joint Attention in Human-Robot Interaction
  • 2019
  • In: Proceedings 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2019. Institute of Electrical and Electronics Engineers (IEEE), pp. 1080-1087
  • Conference paper (peer-reviewed) abstract
    • Joint attention has been shown to be not only crucial for human-human interaction but also human-robot interaction. Joint attention can help to make cooperation more efficient, support disambiguation in instances of uncertainty and make interactions appear more natural and familiar. In this paper, we present an autonomous gaze system that uses multimodal perception capabilities to model responsive joint attention mechanisms. We investigate the effects of our system on people’s perception of a robot within a problem-solving task. Results from a user study suggest that responsive joint attention mechanisms evoke higher perceived feelings of social presence on scales that regard the direction of the robot’s perception. (A minimal sketch of a gaze-following rule is given below.)
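To make the idea of a responsive joint-attention mechanism concrete, here is a minimal sketch. It is not the authors' system: the object list, the `user_gaze_point` estimate, and the nearest-object rule with a distance threshold are illustrative assumptions standing in for the paper's multimodal perception pipeline.

```python
import math

# Hypothetical task objects on the table, as (name, x, y) positions in metres.
OBJECTS = [("red_block", 0.2, 0.1), ("blue_block", -0.1, 0.3), ("card", 0.0, -0.2)]

def joint_attention_target(user_gaze_point, objects=OBJECTS, max_dist=0.15):
    """Joint attention reduced to its simplest responsive rule: direct the
    robot's gaze to the task object nearest the user's estimated gaze point,
    falling back to the user's face when no object is close enough."""
    gx, gy = user_gaze_point
    name, dist = min(
        ((n, math.hypot(ox - gx, oy - gy)) for n, ox, oy in objects),
        key=lambda item: item[1],
    )
    return name if dist <= max_dist else "user_face"

print(joint_attention_target((0.18, 0.12)))  # gaze near the red block -> "red_block"
print(joint_attention_target((0.9, 0.9)))    # gaze off the table -> "user_face"
```

A real system would add the responsiveness the paper evaluates, for example smoothing gaze estimates over time and timing the robot's gaze shifts to the user's attention shifts.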
3.
  • Al Moubayed, Samer, et al. (author)
  • Human-robot Collaborative Tutoring Using Multiparty Multimodal Spoken Dialogue
  • 2014
  • Conference paper (peer-reviewed) abstract
    • In this paper, we describe a project that explores a novel experimental setup towards building a spoken, multi-modally rich, and human-like multiparty tutoring robot. A human-robot interaction setup is designed, and a human-human dialogue corpus is collected. The corpus targets the development of a dialogue system platform to study verbal and nonverbal tutoring strategies in multiparty spoken interactions with robots which are capable of spoken dialogue. The dialogue task is centered on two participants involved in a dialogue aiming to solve a card-ordering game. Along with the participants sits a tutor (robot) that helps the participants perform the task, and organizes and balances their interaction. Different multimodal signals captured and auto-synchronized by different audio-visual capture technologies, such as a microphone array, Kinects, and video cameras, were coupled with manual annotations. These are used to build a situated model of the interaction based on the participants' personalities, their state of attention, their conversational engagement and verbal dominance, and how that is correlated with the verbal and visual feedback, turn-management, and conversation regulatory actions generated by the tutor. Driven by the analysis of the corpus, we will also show the detailed design methodologies for an affective and multimodally rich dialogue system that allows the robot to measure incrementally the attention states and the dominance for each participant, allowing the robot head Furhat to maintain a well-coordinated, balanced, and engaging conversation that attempts to maximize the agreement and the contribution to solve the task. This project sets the first steps to explore the potential of using multimodal dialogue systems to build interactive robots that can serve in educational, team building, and collaborative task solving applications.
4.
  • Al Moubayed, Samer, et al. (author)
  • Tutoring Robots: Multiparty Multimodal Social Dialogue With an Embodied Tutor
  • 2014
  • Conference paper (peer-reviewed) abstract
    • This project explores a novel experimental setup towards building a spoken, multi-modally rich, and human-like multiparty tutoring agent. A setup is developed and a corpus is collected that targets the development of a dialogue system platform to explore verbal and nonverbal tutoring strategies in multiparty spoken interactions with embodied agents. The dialogue task is centered on two participants involved in a dialogue aiming to solve a card-ordering game. With the participants sits a tutor that helps the participants perform the task and organizes and balances their interaction. Different multimodal signals captured and auto-synchronized by different audio-visual capture technologies were coupled with manual annotations to build a situated model of the interaction based on the participants' personalities, their temporally-changing state of attention, their conversational engagement and verbal dominance, and the way these are correlated with the verbal and visual feedback, turn-management, and conversation regulatory actions generated by the tutor. At the end of this chapter we discuss the potential areas of research and development this work opens up and some of the challenges that lie on the road ahead.
5.
  • Altmann, U., et al. (author)
  • Conversational Involvement and Synchronous Nonverbal Behaviour
  • 2012
  • In: Cognitive Behavioural Systems. Berlin, Heidelberg: Springer Berlin/Heidelberg. ISBN 9783642345838, pp. 343-352
  • Conference paper (peer-reviewed) abstract
    • Measuring the quality of an interaction by means of low-level cues has been the topic of many studies in the last couple of years. In this study we propose a novel method for conversation quality assessment. We first test whether manual ratings of conversational involvement and automatic estimation of the synchronisation of facial activity are correlated. We hypothesise that the higher the synchrony, the higher the involvement. We compare two different synchronisation measures. The first measure is defined as the similarity of facial activity at a given point in time. The second is based on dependence analyses between the facial activity time series of two interlocutors. We found that the dependence measure correlates more strongly with conversational involvement than the similarity measure does. (A minimal sketch of the two measures is given below.)
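To make the contrast between the two synchronisation measures concrete, here is a minimal sketch. It is not the authors' implementation: the per-frame facial-activity magnitudes are synthetic, the similarity measure is taken as negative absolute difference at each frame, and the dependence measure as lag-maximised windowed cross-correlation.

```python
import numpy as np

def similarity_measure(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Frame-wise similarity: high when both interlocutors show a similar
    amount of facial activity at the same moment (illustrative stand-in
    for the paper's first measure)."""
    return -np.abs(a - b)

def dependence_measure(a: np.ndarray, b: np.ndarray,
                       win: int = 50, max_lag: int = 10) -> np.ndarray:
    """Windowed, lag-maximised cross-correlation: high when one
    interlocutor's activity predicts the other's, even at a short delay
    (illustrative stand-in for the paper's dependence analysis)."""
    scores = []
    for start in range(0, len(a) - win + 1, win):
        wa, wb = a[start:start + win], b[start:start + win]
        best = max(abs(np.corrcoef(wa[:win - lag], wb[lag:])[0, 1])
                   for lag in range(max_lag))
        scores.append(best)
    return np.asarray(scores)

# Synthetic example: B roughly mirrors A with a 5-frame delay.
rng = np.random.default_rng(0)
a = rng.random(500)
b = np.roll(a, 5) + 0.1 * rng.random(500)
print(similarity_measure(a, b).mean())
print(dependence_measure(a, b).mean())
```

The reported finding corresponds to the lagged, window-based measure tracking involvement better than the frame-wise one, which only rewards simultaneous activity.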
6.
  • Edlund, Jens, et al. (author)
  • Investigating negotiation for load-time in the GetHomeSafe project
  • 2012
  • In: Proc. of Workshop on Innovation and Applications in Speech Technology (IAST). Dublin, Ireland, pp. 45-48
  • Conference paper (peer-reviewed) abstract
    • This paper describes ongoing work by KTH Speech, Music and Hearing in GetHomeSafe, a newly inaugurated EU project in collaboration with DFKI, Nuance, IBM and Daimler. Under the assumption that drivers will utilize technology while driving regardless of legislation, the project aims at finding out how to make the use of in-car technology as safe as possible rather than prohibiting it. We briefly describe the project in general and our role in some more detail, in particular one of our tasks: to build a system that can ask the driver, in an unobtrusive manner, whether now is a good time to speak about X, and that knows how to deal with rejection, for example by asking the driver to get back to it at a better time or to schedule a time that will be convenient.
7.
  • Hjalmarsson, Anna, et al. (author)
  • Gaze direction as a Back-Channel inviting Cue in Dialogue
  • 2012
  • In: IVA 2012 workshop on Realtime Conversational Virtual Agents. Santa Cruz, CA, USA.
  • Conference paper (peer-reviewed) abstract
    • In this study, we experimentally explore the relationship between gaze direction and backchannels in face-to-face interaction. The overall motivation is to use gaze direction in a virtual agent as a means to elicit user feedback. The relationship between gaze and backchannels was tested in an experiment in which participants were asked to provide feedback when listening to a story-telling virtual agent. When speaking, the agent shifted her gaze towards the listener at predefined positions in the dialogue. The results show that listeners are more prone to backchannel when the virtual agent’s gaze is directed towards them than when it is directed away. However, there is a high response variability for different dialogue contexts, which suggests that the timing of backchannels cannot be explained by gaze direction alone.
8.
  • Jonell, Patrik, et al. (author)
  • Crowd-powered design of virtual attentive listeners
  • 2017
  • In: 17th International Conference on Intelligent Virtual Agents, IVA 2017. Cham: Springer. ISBN 9783319674001, pp. 188-191
  • Conference paper (peer-reviewed) abstract
    • This demo presents a web-based system that generates attentive listening behaviours in a virtual agent acquired from audio-visual recordings of attitudinal feedback behaviour of crowdworkers.
9.
  • Jonell, Patrik, et al. (author)
  • Crowdsourced Multimodal Corpora Collection Tool
  • 2018
  • In: Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018). Paris, pp. 728-734
  • Conference paper (peer-reviewed) abstract
    • In recent years, more and more multimodal corpora have been created. To our knowledge there is no publicly available tool which allows for acquiring controlled multimodal data of people in a rapid and scalable fashion. We therefore propose (1) a novel tool which will enable researchers to rapidly gather large amounts of multimodal data spanning a wide demographic range, and (2) an example of how we used this tool for the collection of our "Attentive listener" multimodal corpus. The code is released under an Apache License 2.0 and available as an open-source repository, which can be found at https://github.com/kth-social-robotics/multimodal-crowdsourcing-tool. This tool will allow researchers to set up their own multimodal data collection system quickly and create their own multimodal corpora. Finally, this paper provides a discussion of the advantages and disadvantages of a crowd-sourced data collection tool, especially in comparison to lab-recorded corpora.
10.
  • Jonell, Patrik, et al. (author)
  • FARMI : A Framework for Recording Multi-Modal Interactions
  • 2018
  • In: Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018). Paris: European Language Resources Association, pp. 3969-3974
  • Conference paper (peer-reviewed) abstract
    • In this paper we present (1) a processing architecture used to collect multi-modal sensor data, both for corpora collection and real-time processing, (2) an open-source implementation thereof and (3) a use-case where we deploy the architecture in a multi-party deception game, featuring six human players and one robot. The architecture is agnostic to the choice of hardware (e.g. microphones, cameras, etc.) and programming languages, although our implementation is mostly written in Python. In our use-case, different methods of capturing verbal and non-verbal cues from the participants were used. These were processed in real-time and used to inform the robot about the participants’ deceptive behaviour. The framework is of particular interest for researchers who are interested in the collection of multi-party, richly recorded corpora and the design of conversational systems. Moreover, for researchers who are interested in human-robot interaction, the available modules offer the possibility to easily create both autonomous and Wizard-of-Oz interactions. (A minimal sketch of such a recording architecture is given below.)
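To illustrate the hardware-agnostic pattern the abstract describes, here is a minimal sketch in Python. It is not FARMI's actual API: the `sensor` and `record` functions, the in-process queue, and the JSON-lines log are assumptions standing in for the framework's real transport and serialisation.

```python
import json
import queue
import threading
import time

# Shared message bus: every sensor publishes timestamped samples here.
bus: "queue.Queue[dict]" = queue.Queue()

def sensor(name, read_sample, period, stop):
    """Poll one capture device and publish timestamped samples."""
    while not stop.is_set():
        bus.put({"source": name, "t": time.time(), "data": read_sample()})
        time.sleep(period)

def record(path, stop):
    """Drain the bus and append one JSON line per sample."""
    with open(path, "w") as log:
        while not stop.is_set() or not bus.empty():
            try:
                msg = bus.get(timeout=0.1)
            except queue.Empty:
                continue
            log.write(json.dumps(msg) + "\n")

stop = threading.Event()
threads = [
    threading.Thread(target=sensor, args=("mic", lambda: 0.0, 0.01, stop)),
    threading.Thread(target=sensor, args=("camera", lambda: [0, 0], 0.04, stop)),
    threading.Thread(target=record, args=("session.jsonl", stop)),
]
for t in threads:
    t.start()
time.sleep(1.0)  # record for one second
stop.set()
for t in threads:
    t.join()
```

Because each source only needs to put timestamped messages on the bus, swapping the dummy `read_sample` callables for real microphone or camera drivers, or replacing the in-process queue with a network transport, does not change the recorder.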
Publication type
conference paper (29)
journal article (3)
doctoral thesis (1)
research review (1)
Type of content
peer-reviewed (33)
other academic/artistic (1)
Author/editor
Oertel, Catharine (33)
Gustafson, Joakim (16)
Beskow, Jonas (7)
Skantze, Gabriel (6)
Edlund, Jens (5)
Hjalmarsson, Anna (5)
Al Moubayed, Samer (3)
Black, A (3)
Stefanov, Kalin (3)
Johansson, Martin (3)
Bollepalli, Bajibabu (3)
Salvi, Giampiero (2)
Abelho Pereira, Andr ... (2)
Fermoselle, Leonor (2)
Mendelson, Joe (2)
Hussen-Abdelaziz, A. (2)
Koutsombogera, M. (2)
Novikova, J. (2)
Varol, G. (2)
Campbell, N. (2)
Wagner, Petra (2)
Wagner, P. (1)
Heldner, Mattias, 19 ... (1)
Yu, Y (1)
Székely, Eva (1)
Castellano, Ginevra (1)
Skantze, Gabriel, 19 ... (1)
Obaid, Mohammad, 198 ... (1)
Alexanderson, Simon (1)
Lopes, J. D. (1)
Lopes, J. (1)
Koutsombogera, Maria (1)
Altmann, U. (1)
Fallgren, Per (1)
Avramova, Vanya (1)
Peters, Christopher (1)
Malisz, Zofia (1)
Shore, Todd (1)
Gustafsson, Joakim (1)
Götze, Jana (1)
Campbell, Nick (1)
Chetouani, Mohamed (1)
Bystedt, Mattias (1)
Morency, L. -P (1)
Mascarenhas, Samuel (1)
Włodarczak, Marcin (1)
David Lopes, José (1)
Águas Lopes, José Da ... (1)
Gustafsson, Joakim, ... (1)
Tarasov, A. (1)
Higher education institution
Kungliga Tekniska Högskolan (34)
Uppsala universitet (1)
Stockholms universitet (1)
Chalmers tekniska högskola (1)
Language
English (34)
Research subject (UKÄ/SCB)
Natural sciences (26)
Engineering and technology (4)
