SwePub
Search the SwePub database


Result list for search "WFRF:(Skantze Gabriel)"

Search: WFRF:(Skantze Gabriel)

  • Results 1-10 of 142
1.
  • Ahlberg, Sofie, et al. (author)
  • Co-adaptive Human-Robot Cooperation : Summary and Challenges
  • 2022
  • In: Unmanned Systems. - World Scientific Pub Co Pte Ltd. - 2301-3850, 2301-3869. ; 10:02, pp. 187-203
  • Journal article (peer-reviewed), abstract:
    • The work presented here is a culmination of developments within the Swedish project COIN: Co-adaptive human-robot interactive systems, funded by the Swedish Foundation for Strategic Research (SSF), which addresses a unified framework for co-adaptive methodologies in human-robot co-existence. We investigate co-adaptation in the context of safe planning/control, trust, and multi-modal human-robot interactions, and present novel methods that allow humans and robots to adapt to one another and discuss directions for future work.
2.
  • Al Moubayed, Samer, et al. (author)
  • Effects of 2D and 3D Displays on Turn-taking Behavior in Multiparty Human-Computer Dialog
  • 2011
  • In: SemDial 2011. - Los Angeles, CA, pp. 192-193
  • Conference paper (peer-reviewed), abstract:
    • The perception of gaze from an animated agent on a 2D display has been shown to suffer from the Mona Lisa effect, which means that exclusive mutual gaze cannot be established if there is more than one observer. In this study, we investigate this effect when it comes to turn-taking control in a multi-party human-computer dialog setting, where a 2D display is compared to a 3D projection. The results show that the 2D setting results in longer response times and lower turn-taking accuracy.
3.
  • Al Moubayed, Samer, 1982-, et al. (author)
  • Furhat : A Back-projected Human-like Robot Head for Multiparty Human-Machine Interaction
  • 2012
  • In: Cognitive Behavioural Systems. - Berlin, Heidelberg : Springer Berlin/Heidelberg. - 9783642345838, pp. 114-130
  • Conference paper (peer-reviewed), abstract:
    • In this chapter, we first present a summary of findings from two previous studies on the limitations of using flat displays with embodied conversational agents (ECAs) in the contexts of face-to-face human-agent interaction. We then motivate the need for a three-dimensional display of faces to guarantee accurate delivery of gaze and directional movements and present Furhat, a novel, simple, highly effective, and human-like back-projected robot head that utilizes computer animation to deliver facial movements, and is equipped with a pan-tilt neck. After presenting a detailed summary on why and how Furhat was built, we discuss the advantages of using optically projected animated agents for interaction. We discuss using such agents in terms of situatedness, environment, context awareness, and social, human-like face-to-face interaction with robots where subtle nonverbal and social facial signals can be communicated. At the end of the chapter, we present a recent application of Furhat as a multimodal multiparty interaction system that was presented at the London Science Museum as part of a robot festival. We conclude the paper by discussing future developments, applications and opportunities of this technology.
4.
  • Al Moubayed, Samer, et al. (author)
  • Furhat goes to Robotville: a large-scale multiparty human-robot interaction data collection in a public space
  • 2012
  • In: Proc. of LREC Workshop on Multimodal Corpora. - Istanbul, Turkey.
  • Conference paper (peer-reviewed), abstract:
    • In the four days of the Robotville exhibition at the London Science Museum, UK, during which the back-projected head Furhat in a situated spoken dialogue system was seen by almost 8 000 visitors, we collected a database of 10 000 utterances spoken to Furhat in situated interaction. The data collection is an example of a particular kind of corpus collection of human-machine dialogues in public spaces that has several interesting and specific characteristics, both with respect to the technical details of the collection and with respect to the resulting corpus contents. In this paper, we take the Furhat data collection as a starting point for a discussion of the motives for this type of data collection, its technical peculiarities and prerequisites, and the characteristics of the resulting corpus.
5.
  • Al Moubayed, Samer, et al. (author)
  • Human-robot Collaborative Tutoring Using Multiparty Multimodal Spoken Dialogue
  • 2014
  • Conference paper (peer-reviewed), abstract:
    • In this paper, we describe a project that explores a novel experimental setup towards building a spoken, multi-modally rich, and human-like multiparty tutoring robot. A human-robot interaction setup is designed, and a human-human dialogue corpus is collected. The corpus targets the development of a dialogue system platform to study verbal and nonverbal tutoring strategies in multiparty spoken interactions with robots which are capable of spoken dialogue. The dialogue task is centered on two participants involved in a dialogue aiming to solve a card-ordering game. Along with the participants sits a tutor (robot) that helps the participants perform the task, and organizes and balances their interaction. Different multimodal signals captured and auto-synchronized by different audio-visual capture technologies, such as a microphone array, Kinects, and video cameras, were coupled with manual annotations. These are used to build a situated model of the interaction based on the participants' personalities, their state of attention, their conversational engagement and verbal dominance, and how that is correlated with the verbal and visual feedback, turn-management, and conversation regulatory actions generated by the tutor. Driven by the analysis of the corpus, we will also show the detailed design methodologies for an affective and multimodally rich dialogue system that allows the robot to measure incrementally the attention states and the dominance for each participant, allowing the robot head Furhat to maintain a well-coordinated, balanced, and engaging conversation that attempts to maximize the agreement and the contribution to solve the task. This project sets the first steps to explore the potential of using multimodal dialogue systems to build interactive robots that can serve in educational, team building, and collaborative task solving applications.
6.
  • Al Moubayed, Samer, et al. (author)
  • Lip-reading : Furhat audio visual intelligibility of a back projected animated face
  • 2012
  • In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). - Berlin, Heidelberg : Springer Berlin/Heidelberg, pp. 196-203
  • Conference paper (peer-reviewed), abstract:
    • Back-projecting a computer-animated face onto a three-dimensional static physical model of a face is a promising technology that is gaining ground as a solution to building situated, flexible and human-like robot heads. In this paper, we first briefly describe Furhat, a back-projected robot head built for the purpose of multimodal multiparty human-machine interaction, and its benefits over virtual characters and robotic heads; we then motivate the need to investigate the contribution to speech intelligibility that Furhat's face offers. We present an audio-visual speech intelligibility experiment in which 10 subjects listened to short sentences with a degraded speech signal. The experiment compares the gain in intelligibility from lip reading a face visualized on a 2D screen with that from a 3D back-projected face, from different viewing angles. The results show that the audio-visual speech intelligibility holds when the avatar is projected onto a static face model (in the case of Furhat), and even, rather surprisingly, exceeds it. This means that despite the movement limitations that back-projected animated face models bring about, their audio-visual speech intelligibility is equal to, or even higher than, that of the same models shown on flat displays. At the end of the paper we discuss several hypotheses on how to interpret the results, and motivate future investigations to better explore the characteristics of visual speech perception of 3D projected faces.
7.
  • Al Moubayed, Samer, et al. (author)
  • Multimodal Multiparty Social Interaction with the Furhat Head
  • 2012
  • Conference paper (peer-reviewed), abstract:
    • We will show in this demonstrator an advanced multimodal and multiparty spoken conversational system using Furhat, a robot head based on projected facial animation. Furhat is a human-like interface that utilizes facial animation for physical robot heads using back-projection. In the system, multimodality is enabled using speech and rich visual input signals such as multi-person real-time face tracking and microphone tracking. The demonstrator will showcase a system that is able to carry out social dialogue with multiple interlocutors simultaneously with rich output signals such as eye and head coordination, lips synchronized speech synthesis, and non-verbal facial gestures used to regulate fluent and expressive multiparty conversations.
8.
  • Al Moubayed, Samer, 1982-, et al. (author)
  • Perception of Gaze Direction for Situated Interaction
  • 2012
  • In: Proceedings of the 4th Workshop on Eye Gaze in Intelligent Human Machine Interaction, Gaze-In 2012. - New York, NY, USA : ACM. - 9781450315166
  • Conference paper (peer-reviewed), abstract:
    • Accurate human perception of robots' gaze direction is crucial for the design of a natural and fluent situated multimodal face-to-face interaction between humans and machines. In this paper, we present an experiment targeted at quantifying the effects of different gaze cues synthesized using the Furhat back-projected robot head, on the accuracy of perceived spatial direction of gaze by humans using 18 test subjects. The study first quantifies the accuracy of the perceived gaze direction in a human-human setup, and compares that to the use of synthesized gaze movements in different conditions: viewing the robot eyes frontal or at a 45 degrees angle side view. We also study the effect of 3D gaze by controlling both eyes to indicate the depth of the focal point (vergence), the use of gaze or head pose, and the use of static or dynamic eyelids. The findings of the study are highly relevant to the design and control of robots and animated agents in situated face-to-face interaction.
9.
  • Al Moubayed, Samer, et al. (author)
  • Spontaneous spoken dialogues with the Furhat human-like robot head
  • 2014
  • In: HRI '14 Proceedings of the 2014 ACM/IEEE international conference on Human-robot interaction. - Bielefeld, Germany : ACM, pp. 326-
  • Conference paper (peer-reviewed), abstract:
    • We will show in this demonstrator an advanced multimodal and multiparty spoken conversational system using Furhat, a robot head based on projected facial animation. Furhat is an anthropomorphic robot head that utilizes facial animation for physical robot heads using back-projection. In the system, multimodality is enabled using speech and rich visual input signals such as multi-person real-time face tracking and microphone tracking. The demonstrator will showcase a system that is able to carry out social dialogue with multiple interlocutors simultaneously with rich output signals such as eye and head coordination, lips synchronized speech synthesis, and non-verbal facial gestures used to regulate fluent and expressive multiparty conversations. The dialogue design is performed using the IrisTK [4] dialogue authoring toolkit developed at KTH. The system will also be able to act as a moderator in a quiz game, showing different strategies for regulating spoken situated interactions.
10.
  • Al Moubayed, Samer, et al. (author)
  • Talking with Furhat - multi-party interaction with a back-projected robot head
  • 2012
  • In: Proceedings of Fonetik 2012. - Gothenburg, Sweden, pp. 109-112
  • Conference paper (other academic/artistic), abstract:
    • This is a condensed presentation of some recent work on a back-projected robotic head for multi-party interaction in public settings. We will describe some of the design strategies and give some preliminary analysis of an interaction database collected at the Robotville exhibition at the London Science Museum.
Type of publication
conference paper (112)
journal article (19)
doctoral thesis (6)
book chapter (3)
research review (2)
Type of content
peer-reviewed (128)
other academic/artistic (13)
popular science, debate, etc. (1)
Author/editor
Skantze, Gabriel (78)
Skantze, Gabriel, 19 ... (59)
Gustafson, Joakim (29)
Beskow, Jonas (21)
Al Moubayed, Samer (15)
Edlund, Jens (14)
show more...
Meena, Raveesh (11)
Granström, Björn (10)
Johansson, Martin (9)
Hjalmarsson, Anna (8)
Oertel, Catharine (7)
Carlson, Rolf (7)
Peters, Christopher (6)
Axelsson, Agnes, 199 ... (6)
House, David (5)
Yang, Fangkai (4)
Li, Chengjie (4)
Schlangen, D. (4)
Stefanov, Kalin (3)
Al Moubayed, Samer, ... (3)
Koutsombogera, M. (3)
Avramova, Vanya (3)
Skantze, Gabriel, Pr ... (3)
Boye, Johan (3)
Willemsen, Bram (3)
Kragic, Danica (2)
Székely, Eva (2)
Esposito, A (2)
Gao, Alex Yuan (2)
Tscheligi, Manfred (2)
Bollepalli, Bajibabu (2)
Hussen-Abdelaziz, A. (2)
Novikova, J. (2)
Varol, G. (2)
Blomberg, Mats (2)
Lopes, J. (2)
Heylen, D. (2)
Bohus, D. (2)
Papageorgiou, H. (2)
Traum, David (2)
Axelsson, Nils, 1992 ... (2)
Romeo, Marta (2)
Bohg, Jeannette, 198 ... (2)
Shore, Todd (2)
Gustafsson, Joakim (2)
Heldner, Mattias (2)
Johnson-Roberson, Ma ... (2)
Elgarf, Maha (2)
David Lopes, José (2)
Gustafsson, Joakim, ... (2)
show fewer...
Higher education institution
Kungliga Tekniska Högskolan (141)
Uppsala universitet (3)
Göteborgs universitet (1)
Stockholms universitet (1)
Mälardalens universitet (1)
Language
English (142)
Research subject (UKÄ/SCB)
Natural sciences (128)
Engineering and technology (14)
Humanities (5)

Year
