SwePub
Search the SwePub database


Hit list for the search "WFRF:(Granström Björn)"

Search: WFRF:(Granström Björn)

  • Results 1-10 of 92
1.
  • Carlson, Rolf, et al. (author)
  • Gunnar Fant 1920-2009 In Memoriam
  • 2009
  • In: Phonetica. - : Walter de Gruyter GmbH. - 0031-8388 .- 1423-0321. ; 66:4, pp. 249-250
  • Journal article (peer-reviewed)
2.
  • Engstrand, Olle, et al. (author)
  • In memoriam - Gösta Bruce (1947–2010)
  • 2010
  • In: Journal of the International Phonetic Association. - Cambridge : Cambridge University Press. - 0025-1003. ; 40:3, pp. 379-381
  • Journal article (other scholarly/artistic)
3.
  •  
4.
  • Agelfors, Eva, et al. (author)
  • Synthetic visual speech driven from auditory speech
  • 1999
  • In: Proceedings of Audio-Visual Speech Processing (AVSP'99).
  • Conference paper (peer-reviewed), abstract:
    • We have developed two different methods for using auditory telephone speech to drive the movements of a synthetic face. In the first method, Hidden Markov Models (HMMs) were trained on a phonetically transcribed telephone speech database. The output of the HMMs was then fed into a rule-based visual speech synthesizer as a string of phonemes together with time labels. In the second method, Artificial Neural Networks (ANNs) were trained on the same database to map acoustic parameters directly to facial control parameters; the target parameter trajectories were generated by using phoneme strings from a database as input to the visual speech synthesis. The two methods were evaluated through audiovisual intelligibility tests with ten hearing-impaired persons and compared to “ideal” articulations (where no recognition was involved), to a natural face, and to the intelligibility of the audio alone. The HMM method performed considerably better than the audio-alone condition (54% vs. 34% keywords correct) but not as well as the “ideal” articulating artificial face (64%). Intelligibility for the ANN method was 34% keywords correct.
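The ANN method described in the abstract above, mapping acoustic parameters frame by frame to facial control parameters, can be sketched as a small feed-forward regression network. This is a minimal illustration only: the feature and parameter dimensions, the layer size, and the random weights are all assumptions standing in for a network trained on the transcribed telephone speech database.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 13 acoustic parameters per frame (e.g.
# cepstral coefficients) and 8 facial control parameters (jaw opening,
# lip rounding, etc.). Neither number is taken from the paper.
N_ACOUSTIC, N_HIDDEN, N_FACIAL = 13, 32, 8

# Randomly initialised weights stand in for weights that would be
# learned from a phonetically transcribed speech database.
W1 = rng.normal(0.0, 0.1, (N_ACOUSTIC, N_HIDDEN))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0.0, 0.1, (N_HIDDEN, N_FACIAL))
b2 = np.zeros(N_FACIAL)

def acoustic_to_facial(frames: np.ndarray) -> np.ndarray:
    """Map a (T, N_ACOUSTIC) sequence of acoustic feature frames to a
    (T, N_FACIAL) sequence of facial control parameter trajectories."""
    h = np.tanh(frames @ W1 + b1)              # hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # squash into (0, 1)

# One second of 10 ms frames of (fake) acoustic features:
trajectory = acoustic_to_facial(rng.normal(size=(100, N_ACOUSTIC)))
print(trajectory.shape)  # (100, 8)
```

The per-frame mapping is the essential difference from the HMM method: no phoneme string is recovered, so the facial parameters follow the acoustics directly.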
5.
  •  
6.
  • Al Moubayed, Samer, et al. (author)
  • A robotic head using projected animated faces
  • 2011
  • In: Proceedings of the International Conference on Audio-Visual Speech Processing 2011. - Stockholm : KTH Royal Institute of Technology. ; , pp. 71-
  • Conference paper (peer-reviewed), abstract:
    • This paper presents a setup that employs virtual animated agents for robotic heads. The system uses a laser projector to project animated faces onto a three-dimensional face mask. Projecting animated faces onto a three-dimensional head surface, rather than onto a flat two-dimensional display, eliminates several effects and illusions that degrade interaction on flat surfaces, such as the inability to establish exclusive mutual gaze in situated and multi-partner dialogues. In addition, it gives robotic heads a flexible solution for facial animation that takes advantage of the progress of computer-graphics facial animation over mechanically controlled heads.
7.
  • Al Moubayed, Samer, et al. (author)
  • Animated Faces for Robotic Heads : Gaze and Beyond
  • 2011
  • In: Analysis of Verbal and Nonverbal Communication and Enactment. - Berlin, Heidelberg : Springer Berlin/Heidelberg. - 9783642257742 ; , pp. 19-35
  • Conference paper (peer-reviewed), abstract:
    • We introduce an approach to using animated faces for robotics in which a static physical object is used as a projection surface for an animation: the talking head is projected onto a 3D physical head model. In this chapter we discuss the benefits this approach adds over mechanical heads. We then investigate a phenomenon commonly referred to as the Mona Lisa gaze effect. This effect results from the use of 2D surfaces to display 3D images and causes the gaze of a portrait to seem to follow the observer no matter where it is viewed from. An experiment investigates observers' perception of gaze direction; the analysis shows that the 3D model eliminates the effect and provides an accurate perception of gaze direction. Finally, we discuss the different requirements of gaze in interactive systems and explore the settings these findings give access to.
8.
  • Al Moubayed, Samer, et al. (author)
  • Audio-Visual Prosody : Perception, Detection, and Synthesis of Prominence
  • 2010
  • In: 3rd COST 2102 International Training School on Toward Autonomous, Adaptive, and Context-Aware Multimodal Interfaces. - Berlin, Heidelberg : Springer Berlin Heidelberg. - 9783642181832 ; , pp. 55-71
  • Conference paper (peer-reviewed), abstract:
    • In this chapter, we investigate the effects of facial prominence cues, in the form of gestures, when synthesized on animated talking heads. In the first study, a speech intelligibility experiment is conducted in which speech quality is acoustically degraded and the speech is then presented to 12 subjects through a lip-synchronized talking head carrying head-nod and eyebrow-raising gestures. The experiment shows that perceiving visual prominence as gestures synchronized with the auditory prominence significantly increases speech intelligibility compared to when these gestures are randomly added to speech. We also present a study examining the perception of the talking head's behavior when gestures are added at pitch movements. Using eye-gaze tracking and questionnaires with 10 moderately hearing-impaired subjects, the gaze data show that users look at the face much as they look at a natural face when gestures are coupled with pitch movements, as opposed to when the face carries no gestures. The questionnaires also show that these gestures significantly increase the naturalness and helpfulness of the talking head.
9.
  • Al Moubayed, Samer, et al. (author)
  • Auditory visual prominence : From intelligibility to behavior
  • 2009
  • In: Journal on Multimodal User Interfaces. - : Springer Science and Business Media LLC. - 1783-7677 .- 1783-8738. ; 3:4, pp. 299-309
  • Journal article (peer-reviewed), abstract:
    • Auditory prominence is defined as the salience of an acoustic segment in its context. Prominence is one of the prosodic functions shown to be strongly correlated with facial movements. In this work, we investigate the effects of facial prominence cues, in the form of gestures, when synthesized on animated talking heads. In the first study, a speech intelligibility experiment is conducted: speech quality is acoustically degraded and the fundamental frequency is removed from the signal, then the speech is presented to 12 subjects through a lip-synchronized talking head carrying head-nod and eyebrow-raise gestures synchronized with the auditory prominence. The experiment shows that presenting prominence as facial gestures significantly increases speech intelligibility compared to when these gestures are randomly added to speech. We also present a follow-up study examining the perception of the talking head's behavior when gestures are added over pitch accents. Using eye-gaze tracking and questionnaires with 10 moderately hearing-impaired subjects, the gaze data show that users look at the face much as they look at a natural face when gestures are coupled with pitch accents, as opposed to when the face carries no gestures. The questionnaires also show that these gestures significantly increase the naturalness and the understanding of the talking head.
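Coupling visual gestures to auditory prominence, as in the abstract above, amounts to placing a short gesture trajectory at each prominent syllable. A minimal sketch under stated assumptions: the raised-cosine nod shape, its 300 ms length, and the 100 Hz control frame rate are illustrative choices, not values from the paper.

```python
import numpy as np

FRAME_RATE = 100  # control frames per second (assumed)

def nod_trajectory(duration_s: float, accent_times: list[float],
                   nod_len_s: float = 0.3, amplitude: float = 1.0) -> np.ndarray:
    """Return a head-pitch control track with a smooth raised-cosine
    nod centred on each pitch-accent time (in seconds)."""
    track = np.zeros(int(duration_s * FRAME_RATE))
    nod_len = int(nod_len_s * FRAME_RATE)
    # One nod: half-cosine envelope rising 0 -> amplitude -> 0.
    nod = amplitude * 0.5 * (1 - np.cos(2 * np.pi * np.arange(nod_len) / nod_len))
    for t in accent_times:
        start = int(t * FRAME_RATE) - nod_len // 2
        lo, hi = max(start, 0), min(start + nod_len, len(track))
        # Overlapping nods keep the larger displacement.
        track[lo:hi] = np.maximum(track[lo:hi], nod[lo - start:hi - start])
    return track

# Two pitch accents in a two-second utterance:
track = nod_trajectory(2.0, accent_times=[0.5, 1.4])
```

The contrast tested in the study then reduces to feeding the same nod shapes either these accent-aligned times or randomly drawn times.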
10.
  • Al Moubayed, Samer, 1982-, et al. (author)
  • Furhat : A Back-projected Human-like Robot Head for Multiparty Human-Machine Interaction
  • 2012
  • In: Cognitive Behavioural Systems. - Berlin, Heidelberg : Springer Berlin/Heidelberg. - 9783642345838 ; , pp. 114-130
  • Conference paper (peer-reviewed), abstract:
    • In this chapter, we first summarize findings from two previous studies on the limitations of using flat displays with embodied conversational agents (ECAs) in face-to-face human-agent interaction. We then motivate the need for a three-dimensional display of faces to guarantee accurate delivery of gaze and directional movements, and present Furhat, a novel, simple, highly effective, and human-like back-projected robot head that uses computer animation to deliver facial movements and is equipped with a pan-tilt neck. After a detailed summary of why and how Furhat was built, we discuss the advantages of optically projected animated agents for interaction, in terms of situatedness, environment and context awareness, and social, human-like face-to-face interaction with robots in which subtle nonverbal and social facial signals can be communicated. At the end of the chapter, we present a recent application of Furhat as a multimodal multiparty interaction system shown at the London Science Museum as part of a robot festival. We conclude by discussing future developments, applications, and opportunities of this technology.
Publication type
conference papers (60)
journal articles (12)
book chapters (11)
reports (3)
doctoral theses (3)
licentiate theses (2)
research reviews (1)
Content type
peer-reviewed (67)
other scholarly/artistic (23)
popular science, debate, etc. (2)
Author/editor
Granström, Björn (85)
Beskow, Jonas (40)
House, David (28)
Bruce, Gösta (23)
Gustafson, Joakim (13)
Al Moubayed, Samer (12)
Skantze, Gabriel (10)
Salvi, Giampiero (7)
Edlund, Jens (6)
Frid, Johan (5)
Enflo, Laura (5)
Blomberg, Mats (5)
Agelfors, Eva (4)
Sundberg, Johan (3)
Spens, Karl-Erik (3)
Öhman, Tobias (3)
Strangert, Eva (2)
Botinis, A (2)
Friberg, Anders (2)
Lundeberg, Magnus (2)
Lindblom, Björn (2)
Al Moubayed, Samer, ... (2)
Tscheligi, Manfred (2)
Öster, Anne-Marie (2)
van Son, Nic (2)
Ormel, Ellen (2)
Herzke, Tobias (2)
Asnafi, Nader, 1960- (1)
Olsson, Håkan (1)
Aaltonen, Olli (1)
Engstrand, Olle (1)
Segerup, My (1)
Johansson, Christer (1)
Holmgren, Björn (1)
Nilsson, Björn (1)
Claesson, Ingvar (1)
Dahlquist, Martin (1)
Dahlquist, M (1)
Lundeberg, M (1)
Spens, K-E (1)
Karlsson, Inger (1)
Leisner, Peter (1)
Nordebo, Sven (1)
Alexanderson, Simon (1)
Mirning, Nicole (1)
Mirning, N. (1)
Öster, Ann-Marie (1)
Pettersson, Anders (1)
Megyesi, Beata (1)
Alexandersson, Anna (1)
Higher education institution
Kungliga Tekniska Högskolan (63)
Lunds universitet (23)
Uppsala universitet (2)
Stockholms universitet (2)
Umeå universitet (1)
Mälardalens universitet (1)
Örebro universitet (1)
RISE (1)
Karolinska Institutet (1)
Högskolan Dalarna (1)
Blekinge Tekniska Högskola (1)
Sveriges Lantbruksuniversitet (1)
Language
English (90)
Swedish (2)
Research subject (UKÄ/SCB)
Natural sciences (55)
Humanities (27)
Engineering and technology (4)
Social sciences (4)
Medical and health sciences (1)
Agricultural sciences (1)

