SwePub
Search the SwePub database

  Advanced search

Result list for search "WFRF:(Gustafson Joakim)"

Search: WFRF:(Gustafson Joakim)

  • Results 1-10 of 153
Sort/group the result list
   
1.
  • Abelho Pereira, André Tiago, et al. (authors)
  • Effects of Different Interaction Contexts when Evaluating Gaze Models in HRI
  • 2020
  • Conference paper (peer-reviewed), abstract:
    • We previously introduced a responsive joint attention system that uses multimodal information from users engaged in a spatial reasoning task with a robot and communicates joint attention via the robot's gaze behavior [25]. An initial evaluation of our system with adults showed it to improve users' perceptions of the robot's social presence. To investigate the repeatability of our prior findings across settings and populations, here we conducted two further studies employing the same gaze system with the same robot and task but in different contexts: evaluation of the system with external observers and evaluation with children. The external observer study suggests that third-person perspectives over videos of gaze manipulations can be used either as a manipulation check before committing to costly real-time experiments or to further establish previous findings. However, the replication of our original adults study with children in school did not confirm the effectiveness of our gaze manipulation, suggesting that different interaction contexts can affect the generalizability of results in human-robot interaction gaze studies.
2.
  • Abelho Pereira, André Tiago, et al. (authors)
  • Responsive Joint Attention in Human-Robot Interaction
  • 2019
  • In: Proceedings 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2019. Institute of Electrical and Electronics Engineers (IEEE), pp. 1080-1087
  • Conference paper (peer-reviewed), abstract:
    • Joint attention has been shown to be not only crucial for human-human interaction but also human-robot interaction. Joint attention can help to make cooperation more efficient, support disambiguation in instances of uncertainty and make interactions appear more natural and familiar. In this paper, we present an autonomous gaze system that uses multimodal perception capabilities to model responsive joint attention mechanisms. We investigate the effects of our system on people’s perception of a robot within a problem-solving task. Results from a user study suggest that responsive joint attention mechanisms evoke higher perceived feelings of social presence on scales that regard the direction of the robot’s perception.
3.
  • Al Moubayed, Samer, et al. (authors)
  • Analysis of gaze and speech patterns in three-party quiz game interaction
  • 2013
  • In: Interspeech 2013. The International Speech Communication Association (ISCA), pp. 1126-1130
  • Conference paper (peer-reviewed), abstract:
    • In order to understand and model the dynamics between interaction phenomena such as gaze and speech in face-to-face multiparty interaction between humans, we need large quantities of reliable, objective data of such interactions. To date, this type of data is in short supply. We present a data collection setup using automated, objective techniques in which we capture the gaze and speech patterns of triads deeply engaged in a high-stakes quiz game. The resulting corpus consists of five one-hour recordings, and is unique in that it makes use of three state-of-the-art gaze trackers (one per subject) in combination with a state-of-the-art conical microphone array designed to capture roundtable meetings. Several video channels are also included. In this paper we present the obstacles we encountered and the possibilities afforded by a synchronised, reliable combination of large-scale multi-party speech and gaze data, and an overview of the first analyses of the data. Index Terms: multimodal corpus, multiparty dialogue, gaze patterns, multiparty gaze.
4.
  • Al Moubayed, Samer, et al. (authors)
  • Furhat goes to Robotville: a large-scale multiparty human-robot interaction data collection in a public space
  • 2012
  • In: Proc. of LREC Workshop on Multimodal Corpora. Istanbul, Turkey.
  • Conference paper (peer-reviewed), abstract:
    • In the four days of the Robotville exhibition at the London Science Museum, UK, during which the back-projected head Furhat in a situated spoken dialogue system was seen by almost 8 000 visitors, we collected a database of 10 000 utterances spoken to Furhat in situated interaction. The data collection is an example of a particular kind of corpus collection of human-machine dialogues in public spaces that has several interesting and specific characteristics, both with respect to the technical details of the collection and with respect to the resulting corpus contents. In this paper, we take the Furhat data collection as a starting point for a discussion of the motives for this type of data collection, its technical peculiarities and prerequisites, and the characteristics of the resulting corpus.
5.
  • Al Moubayed, Samer, et al. (authors)
  • Human-robot Collaborative Tutoring Using Multiparty Multimodal Spoken Dialogue
  • 2014
  • Conference paper (peer-reviewed), abstract:
    • In this paper, we describe a project that explores a novel experimental setup towards building a spoken, multi-modally rich, and human-like multiparty tutoring robot. A human-robot interaction setup is designed, and a human-human dialogue corpus is collected. The corpus targets the development of a dialogue system platform to study verbal and nonverbal tutoring strategies in multiparty spoken interactions with robots which are capable of spoken dialogue. The dialogue task is centered on two participants involved in a dialogue aiming to solve a card-ordering game. Along with the participants sits a tutor (robot) that helps the participants perform the task, and organizes and balances their interaction. Different multimodal signals captured and auto-synchronized by different audio-visual capture technologies, such as a microphone array, Kinects, and video cameras, were coupled with manual annotations. These are used to build a situated model of the interaction based on the participants' personalities, their state of attention, their conversational engagement and verbal dominance, and how that is correlated with the verbal and visual feedback, turn-management, and conversation regulatory actions generated by the tutor. Driven by the analysis of the corpus, we will also show the detailed design methodologies for an affective, multimodally rich dialogue system that allows the robot to incrementally measure the attention states and the dominance for each participant, allowing the robot head Furhat to maintain a well-coordinated, balanced, and engaging conversation that attempts to maximize the agreement and the contribution to solve the task. This project sets the first steps to explore the potential of using multimodal dialogue systems to build interactive robots that can serve in educational, team building, and collaborative task solving applications.
6.
  • Al Moubayed, Samer, et al. (authors)
  • Multimodal Multiparty Social Interaction with the Furhat Head
  • 2012
  • Conference paper (peer-reviewed), abstract:
    • We will show in this demonstrator an advanced multimodal and multiparty spoken conversational system using Furhat, a robot head based on projected facial animation. Furhat is a human-like interface that utilizes facial animation for physical robot heads using back-projection. In the system, multimodality is enabled using speech and rich visual input signals such as multi-person real-time face tracking and microphone tracking. The demonstrator will showcase a system that is able to carry out social dialogue with multiple interlocutors simultaneously with rich output signals such as eye and head coordination, lip-synchronized speech synthesis, and non-verbal facial gestures used to regulate fluent and expressive multiparty conversations.
7.
  • Al Moubayed, Samer, et al. (authors)
  • Talking with Furhat - multi-party interaction with a back-projected robot head
  • 2012
  • In: Proceedings of Fonetik 2012. Gothenburg, Sweden, pp. 109-112
  • Conference paper (other academic/artistic), abstract:
    • This is a condensed presentation of some recent work on a back-projected robotic head for multi-party interaction in public settings. We will describe some of the design strategies and give some preliminary analysis of an interaction database collected at the Robotville exhibition at the London Science Museum.
8.
  • Bell, Linda, et al. (authors)
  • A Comparison of Disfluency Distribution in a Unimodal and a Multimodal Speech Interface
  • 2000
  • In: Proceedings of ICSLP 00.
  • Conference paper (peer-reviewed), abstract:
    • In this paper, we compare the distribution of disfluencies in two human-computer dialogue corpora. One corpus consists of unimodal travel booking dialogues, which were recorded over the telephone. In this unimodal system, all components except the speech recognition were authentic. The other corpus was collected using a semi-simulated multimodal dialogue system with an animated talking agent and a clickable map. The aim of this paper is to analyze and discuss the effects of modality, task and interface design on the distribution and frequency of disfluencies in these two corpora.
9.
  • Bell, Linda, et al. (authors)
  • Children’s convergence in referring expressions to graphical objects in a speech-enabled computer game
  • 2007
  • In: 8th Annual Conference of the International Speech Communication Association. Antwerp, Belgium. ISBN 9781605603162, pp. 2788-2791
  • Conference paper (peer-reviewed), abstract:
    • This paper describes an empirical study of children's spontaneous interactions with an animated character in a speech-enabled computer game. More specifically, it deals with convergence of referring expressions. 49 children were invited to play the game, which was initiated by a collaborative "put-that-there" task. In order to solve this task, the children had to refer to both physical objects and icons in a 3D environment. For physical objects, which were mostly referred to using straightforward noun phrases, lexical convergence took place in 90% of all cases. In the case of the icons, the children were more innovative and spontaneously referred to them in many different ways. Even after being prompted by the system, lexical convergence took place for only 50% of the icons. In the cases where convergence did take place, the effect of the system's prompts was quite local, and the children quickly resorted to their original way of referring when naming new icons in later tasks.
10.
  • Bell, Linda, et al. (authors)
  • Modality Convergence in a Multimodal Dialogue System
  • 2000
  • In: Proceedings of Götalog, pp. 29-34
  • Conference paper (peer-reviewed), abstract:
    • When designing multimodal dialogue systems allowing speech as well as graphical operations, it is important to understand not only how people make use of the different modalities in their utterances, but also how the system might influence a user's choice of modality by its own behavior. This paper describes an experiment in which subjects interacted with two versions of a simulated multimodal dialogue system. One version used predominantly graphical means when referring to specific objects; the other used predominantly verbal referential expressions. The purpose of the study was to find out what effect, if any, the system's referential strategy had on the user's behavior. The results provided limited support for the hypothesis that the system can influence users to adopt another modality for the purpose of referring.
Publication type
conference paper (122)
journal article (12)
book chapter (11)
doctoral thesis (5)
report (1)
proceedings (editorship) (1)
other publication (1)
Content type
peer-reviewed (131)
other academic/artistic (19)
popular science, debate, etc. (3)
Author/editor
Gustafson, Joakim (126)
Beskow, Jonas (37)
Edlund, Jens (28)
Skantze, Gabriel (26)
Oertel, Catharine (17)
Boye, Johan (14)
Granström, Björn (13)
Bell, Linda (12)
Székely, Eva (11)
Abelho Pereira, Andr ... (9)
Al Moubayed, Samer (9)
Neiberg, Daniel (8)
Heldner, Mattias (8)
Wirén, Mats, 1954- (7)
Heldner, Mattias, 19 ... (6)
House, David (5)
Carlson, Rolf (5)
Malisz, Zofia (5)
Henter, Gustav Eje, ... (4)
Edlund, Jens, 1967- (4)
Bruce, Gösta (3)
Salvi, Giampiero (3)
Black, A (3)
Johansson, Martin (3)
Skantze, Gabriel, 19 ... (3)
Nivre, Joakim (3)
Bollepalli, Bajibabu (3)
Leite, Iolanda (3)
Schötz, Susanne (3)
Hjalmarsson, Anna (3)
Gustafson, Joakim, 1 ... (3)
Wagner, Petra (3)
Włodarczak, Marcin, ... (2)
Strangert, Eva (2)
Yu, Y (2)
Kragic, Danica, 1971 ... (2)
Fermoselle, Leonor (2)
Mendelson, Joe (2)
Fredriksson, M (2)
Hagman, Göran (2)
Kivipelto, Miia (2)
Stefanov, Kalin (2)
Tscheligi, Manfred (2)
Blomberg, Mats (2)
Megyesi, Beata (2)
Kucherenko, Taras, 1 ... (2)
Samuelsson, Joakim, ... (2)
Lindström, Anders (2)
Pettersson, Eva (2)
Dahlqvist, Bengt (2)
Higher education institution
Kungliga Tekniska Högskolan (132)
Stockholms universitet (18)
Linköpings universitet (5)
Uppsala universitet (4)
Umeå universitet (3)
Lunds universitet (3)
Linnéuniversitetet (2)
Göteborgs universitet (1)
Mälardalens universitet (1)
Language
English (153)
Research subject (UKÄ/SCB)
Natural sciences (108)
Humanities (17)
Engineering and technology (14)
Social sciences (6)
Medical and health sciences (3)
