SwePub
Search the SwePub database


Result list for the search "WFRF:(House David)"

Search: WFRF:(House David)

  • Results 1-10 of 159
1.
  • Dennis, Martin, et al. (author)
  • Effects of fluoxetine on functional outcomes after acute stroke (FOCUS) : a pragmatic, double-blind, randomised, controlled trial
  • 2019
  • In: The Lancet. - 0140-6736 .- 1474-547X. ; 393:10168, pp. 265-274
  • Journal article (peer-reviewed); abstract:
    • Background: Results of small trials indicate that fluoxetine might improve functional outcomes after stroke. The FOCUS trial aimed to provide a precise estimate of these effects. Methods: FOCUS was a pragmatic, multicentre, parallel group, double-blind, randomised, placebo-controlled trial done at 103 hospitals in the UK. Patients were eligible if they were aged 18 years or older, had a clinical stroke diagnosis, were enrolled and randomly assigned between 2 days and 15 days after onset, and had focal neurological deficits. Patients were randomly allocated fluoxetine 20 mg or matching placebo orally once daily for 6 months via a web-based system by use of a minimisation algorithm. The primary outcome was functional status, measured with the modified Rankin Scale (mRS), at 6 months. Patients, carers, health-care staff, and the trial team were masked to treatment allocation. Functional status was assessed at 6 months and 12 months after randomisation. Patients were analysed according to their treatment allocation. This trial is registered with the ISRCTN registry, number ISRCTN83290762. Findings: Between Sept 10, 2012, and March 31, 2017, 3127 patients were recruited. 1564 patients were allocated fluoxetine and 1563 allocated placebo. mRS data at 6 months were available for 1553 (99.3%) patients in each treatment group. The distribution across mRS categories at 6 months was similar in the fluoxetine and placebo groups (common odds ratio adjusted for minimisation variables 0.951 [95% CI 0.839-1.079]; p=0.439). Patients allocated fluoxetine were less likely than those allocated placebo to develop new depression by 6 months (210 [13.43%] patients vs 269 [17.21%]; difference 3.78% [95% CI 1.26-6.30]; p=0.0033), but they had more bone fractures (45 [2.88%] vs 23 [1.47%]; difference 1.41% [95% CI 0.38-2.43]; p=0.0070). There were no significant differences in any other event at 6 or 12 months. Interpretation: Fluoxetine 20 mg given daily for 6 months after acute stroke does not seem to improve functional outcomes. Although the treatment reduced the occurrence of depression, it increased the frequency of bone fractures. These results do not support the routine use of fluoxetine either for the prevention of post-stroke depression or to promote recovery of function.
2.
  • Al Moubayed, Samer, et al. (author)
  • Animated Faces for Robotic Heads : Gaze and Beyond
  • 2011
  • In: Analysis of Verbal and Nonverbal Communication and Enactment. - Berlin, Heidelberg : Springer Berlin/Heidelberg. - 9783642257742 ; pp. 19-35
  • Conference paper (peer-reviewed); abstract:
    • We introduce an approach to using animated faces for robotics in which a static physical object is used as a projection surface for an animation: the talking head is projected onto a 3D physical head model. In this chapter we discuss the benefits this approach adds over mechanical heads. We then investigate a phenomenon commonly referred to as the Mona Lisa gaze effect. This effect results from the use of 2D surfaces to display 3D images and causes the gaze of a portrait to seemingly follow the observer no matter where it is viewed from. The experiment investigates the perception of gaze direction by observers. The analysis shows that the 3D model eliminates the effect and provides an accurate perception of gaze direction. Finally, we discuss the different requirements of gaze in interactive systems and explore the different settings these findings give access to.
3.
  • Al Moubayed, Samer, et al. (author)
  • Audio-Visual Prosody : Perception, Detection, and Synthesis of Prominence
  • 2010
  • In: 3rd COST 2102 International Training School on Toward Autonomous, Adaptive, and Context-Aware Multimodal Interfaces. - Berlin, Heidelberg : Springer Berlin Heidelberg. - 9783642181832 ; pp. 55-71
  • Conference paper (peer-reviewed); abstract:
    • In this chapter, we investigate the effects of facial prominence cues, in terms of gestures, when synthesized on animated talking heads. In the first study a speech intelligibility experiment is conducted, where speech quality is acoustically degraded and the speech is presented to 12 subjects through a lip-synchronized talking head carrying head-nod and eyebrow-raising gestures. The experiment shows that perceiving visual prominence as gestures, synchronized with the auditory prominence, significantly increases speech intelligibility compared to when these gestures are randomly added to speech. We also present a study examining the perception of the behavior of the talking heads when gestures are added at pitch movements. Using eye-gaze tracking technology and questionnaires for 10 moderately hearing-impaired subjects, the results of the gaze data show that, when gestures are coupled with pitch movements, users look at the face in a fashion similar to when they look at a natural face, as opposed to when the face carries no gestures. From the questionnaires, the results also show that these gestures significantly increase the naturalness and helpfulness of the talking head.
4.
  • Alexanderson, Simon, et al. (author)
  • Aspects of co-occurring syllables and head nods in spontaneous dialogue
  • 2013
  • In: Proceedings of the 12th International Conference on Auditory-Visual Speech Processing (AVSP2013). - : The International Speech Communication Association (ISCA). ; pp. 169-172
  • Conference paper (peer-reviewed); abstract:
    • This paper reports on the extraction and analysis of head nods taken from motion capture data of spontaneous dialogue in Swedish. The head nods were extracted automatically and then manually classified in terms of gestures having a beat function or multifunctional gestures. Prosodic features were extracted from syllables co-occurring with the beat gestures. While the peak rotation of the nod is on average aligned with the stressed syllable, the results show considerable variation in fine temporal synchronization. The syllables co-occurring with the gestures generally show greater intensity, higher F0, and greater F0 range when compared to the mean across the entire dialogue. A functional analysis shows that the majority of the syllables belong to words bearing a focal accent.
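A minimal sketch of this kind of per-syllable prosodic measurement (mean intensity, mean F0, and F0 range against the dialogue-wide mean), for orientation only: it uses the praat-parselmouth Python interface to Praat, and the audio file name and syllable interval below are hypothetical stand-ins, not values from the paper.

    # Sketch: per-syllable intensity, mean F0, and F0 range, compared with
    # the mean over the whole recording. Assumes praat-parselmouth
    # (pip install praat-parselmouth); "dialogue.wav" and the syllable
    # times below are hypothetical.
    import numpy as np
    import parselmouth

    snd = parselmouth.Sound("dialogue.wav")
    pitch = snd.to_pitch()                   # Praat pitch tracking
    intensity = snd.to_intensity()

    f0 = pitch.selected_array["frequency"]   # Hz; 0.0 where unvoiced
    f0_times = pitch.xs()
    db = intensity.values[0]                 # intensity contour in dB
    db_times = intensity.xs()

    def features(t_start, t_end):
        # Mean intensity, mean F0, and F0 range within [t_start, t_end].
        voiced = (f0_times >= t_start) & (f0_times <= t_end) & (f0 > 0)
        frames = (db_times >= t_start) & (db_times <= t_end)
        seg_f0 = f0[voiced]
        return {
            "mean_db": float(np.mean(db[frames])),
            "mean_f0": float(np.mean(seg_f0)) if seg_f0.size else float("nan"),
            "f0_range": float(np.ptp(seg_f0)) if seg_f0.size else float("nan"),
        }

    # A hypothetical syllable co-occurring with a nod, vs. the whole dialogue.
    print(features(12.34, 12.58))
    print(features(0.0, db_times[-1]))

In the paper's setup the syllable boundaries would come from the dialogue annotation; here they are hard-coded purely for illustration.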
5.
  • Alexanderson, Simon, et al. (author)
  • Automatic annotation of gestural units in spontaneous face-to-face interaction
  • 2016
  • In: MA3HMI 2016 - Proceedings of the Workshop on Multimodal Analyses Enabling Artificial Agents in Human-Machine Interaction. - New York, NY, USA : ACM. - 9781450345620 ; pp. 15-19
  • Conference paper (peer-reviewed); abstract:
    • Speech and gesture co-occur in spontaneous dialogue in a highly complex fashion. There is a large variability in the motion that people exhibit during a dialogue, and different kinds of motion occur during different states of the interaction. A wide range of multimodal interface applications, for example in the fields of virtual agents or social robots, can be envisioned where it is important to be able to automatically identify gestures that carry information and discriminate them from other types of motion. While it is easy for a human to distinguish and segment manual gestures from a flow of multimodal information, the same task is not trivial to perform for a machine. In this paper we present a method to automatically segment and label gestural units from a stream of 3D motion capture data. The gestural flow is modeled with a 2-level Hierarchical Hidden Markov Model (HHMM) where the sub-states correspond to gesture phases. The model is trained based on labels of complete gesture units and self-adaptive manipulators. The model is tested and validated on two datasets differing in genre and in method of capturing motion, and outperforms a state-of-the-art SVM classifier on a publicly available dataset.
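The modeling idea can be illustrated, loosely, with a flat HMM, since a hierarchical HMM can be expanded into an equivalent flat HMM over combined (unit, phase) states. The sketch below is an assumption-laden stand-in, not the authors' implementation: it uses the hmmlearn library, synthetic features, a four-phase inventory borrowed from the gesture literature, and unsupervised EM training rather than the label-based training described in the abstract.

    # Sketch: segmenting motion-capture frames into gesture phases with a
    # flat HMM standing in for the paper's 2-level HHMM. Assumes hmmlearn
    # (pip install hmmlearn); features and phase names are illustrative.
    import numpy as np
    from hmmlearn import hmm

    PHASES = ["rest", "preparation", "stroke", "retraction"]

    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 6))   # stand-in per-frame motion features (T, D)

    model = hmm.GaussianHMM(n_components=len(PHASES),
                            covariance_type="diag", n_iter=50, random_state=0)
    model.fit(X)                     # unsupervised EM; the state-to-phase
    states = model.predict(X)        # naming is arbitrary without labels

    # Collapse frame-level states into contiguous (start, end, label) segments.
    segments, start = [], 0
    for t in range(1, len(states) + 1):
        if t == len(states) or states[t] != states[start]:
            segments.append((start, t, PHASES[states[start]]))
            start = t
    print(segments[:5])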
6.
7.
  • Alexanderson, Simon, et al. (author)
  • Extracting and analyzing head movements accompanying spontaneous dialogue
  • 2013
  • In: Conference Proceedings TiGeR 2013.
  • Conference paper (peer-reviewed); abstract:
    • This paper reports on a method developed for extracting and analyzing head gestures taken from motion capture data of spontaneous dialogue in Swedish. Candidate head gestures with beat function were extracted automatically and then manually classified using a 3D player which displays time-synced audio and 3D point data of the motion capture markers together with animated characters. Prosodic features were extracted from syllables co-occurring with a subset of the classified gestures. The beat gestures show considerable variation in temporal synchronization with the syllables, while the syllables generally show greater intensity, higher F0, and greater F0 range when compared to the mean across the entire dialogue. Additional features for further analysis and automatic classification of the head gestures are discussed.
8.
  • Ambrazaitis, Gilbert, 1979-, et al. (author)
  • Accentual falls and rises vary as a function of accompanying head and eyebrow movements
  • 2018
  • In: Proceedings FONETIK 2018. - Gothenburg : University of Gothenburg. ; pp. 5-7
  • Conference paper (other academic/artistic); abstract:
    • In this study we examine prosodic prominence from a multimodal perspective. Our research question is whether the phonetic realization of accentual falls and rises in Swedish complex pitch accents varies as a function of accompanying head and eyebrow movements. The study is based on audio and video data from 60 brief news readings from Swedish Television (SVT Rapport), comprising 1936 words in total, or about 12 minutes of speech from five news anchors (two female, three male). The results suggest a tendency toward a cumulative relation between verbal and visual prominence cues: the more visual cues accompany a word, the higher the pitch peaks and the larger the rises and falls.
9.
  • Ambrazaitis, Gilbert, et al. (author)
  • Acoustic features of multimodal prominences : Do visual beat gestures affect verbal pitch accent realization?
  • 2017
  • In: Proceedings of the 14th International Conference on Auditory-Visual Speech Processing (AVSP2017). - Stockholm : International Speech Communication Association. - 2308-975X.
  • Conference paper (peer-reviewed); abstract:
    • The interplay of verbal and visual prominence cues has attracted recent attention, but previous findings are inconclusive as to whether and how the two modalities are integrated in the production and perception of prominence. In particular, we do not know whether the phonetic realization of pitch accents is influenced by co-speech beat gestures, and previous findings seem to generate different predictions. In this study, we investigate acoustic properties of prominent words as a function of visual beat gestures in a corpus of read news from Swedish television. The corpus was annotated for head and eyebrow beats as well as sentence-level pitch accents. Four types of prominence cues occurred particularly frequently in the corpus: (1) pitch accent only, (2) pitch accent plus head, (3) pitch accent plus head plus eyebrows, and (4) head only. The results show that (4) differs from (1-3) in terms of a smaller pitch excursion and shorter syllable duration. They also reveal significantly larger pitch excursions in (2) than in (1), suggesting that the realization of a pitch accent is to some extent influenced by the presence of visual prominence cues. Results are discussed in terms of the interaction between beat gestures and prosody with a potential functional difference between head and eyebrow beats.
10.
  • Ambrazaitis, Gilbert, 1979-, et al. (author)
  • Auditory vs. audiovisual prominence ratings of speech involving spontaneously produced head movements
  • 2022
  • In: Proceedings of the 11th International Conference on Speech Prosody (Speech Prosody 2022). - : International Speech Communication Association. - 2333-2042. ; pp. 352-356
  • Conference paper (peer-reviewed); abstract:
    • Visual information can be integrated in prominence perception, but most available evidence stems from controlled experimental settings, often involving synthetic stimuli. The present study provides evidence from spontaneously produced head gestures that occurred in Swedish television news readings. Sixteen short clips (containing 218 words in total) were rated for word prominence by 85 adult volunteers in a between-subjects design (44 in an audio-visual vs. 41 in an audio-only condition) using a web-based rating task. As an initial test of overall rating behavior, average prominence across all 218 words was compared between the two conditions, revealing no significant difference. In a second step, we compared normalized prominence ratings between the two conditions for all 218 words individually. These results displayed significant (or near significant, p
Type of publication
conference paper (120)
journal article (21)
book chapter (9)
doctoral thesis (6)
proceedings (editorship) (1)
other publication (1)
licentiate thesis (1)
Type of content
peer-reviewed (123)
other academic/artistic (34)
popular science, debate, etc. (2)
Author/editor
House, David (152)
Granström, Björn (28)
Beskow, Jonas (26)
Ambrazaitis, Gilbert ... (25)
Edlund, Jens (24)
Karlsson, Anastasia (20)
Frid, Johan (17)
Strömbergsson, Sofia (13)
Horne, Merle (12)
Bruce, Gösta (9)
Svensson Lundmark, M ... (9)
Gustafson, Kjell (9)
Alexanderson, Simon (7)
Ambrazaitis, Gilbert (7)
Touati, Paul (7)
Elenius, Kjell (6)
Heldner, Mattias (6)
Skantze, Gabriel (5)
Hjalmarsson, Anna (5)
Carlson, Rolf (5)
Gustafson, Joakim (4)
Zellers, Margaret (4)
Hellmer, Kahl (4)
Salvi, Giampiero (3)
Al Moubayed, Samer (3)
Artman, Henrik, 1968 ... (3)
Hultén, Magnus, 1970 ... (3)
Nordstrand, Magnus (3)
Svanfeldt, Gunilla (3)
Edlund, Jens, Docent ... (3)
Schmitt, Thorsten (2)
Karlsson, A. (2)
Eriksson, Gunnar, 19 ... (2)
Domeij, Rickard, 195 ... (2)
Merkel, Magnus (2)
Massel, Felix (2)
Duda, Laurent (2)
Megyesi, Beata (2)
Cerrato, Loredana (2)
Strömqvist, Sven (2)
Ambrazaitis, G. (2)
Svensson Lundmark, M ... (2)
Schötz, Susanne (2)
Artman, Henrik (2)
Hulten, Magnus (2)
Fallgren, Per (2)
Lastow, Birgitta (2)
Nylund Skog, Susanne ... (2)
Öqvist, Jenny, 1969- (2)
Edlund, Jens, 1967- (2)
Higher education institution
Kungliga Tekniska Högskolan (99)
Lunds universitet (46)
Linnéuniversitetet (30)
Uppsala universitet (5)
Stockholms universitet (5)
Göteborgs universitet (3)
Linköpings universitet (3)
Karolinska Institutet (2)
Institutet för språk och folkminnen (2)
Language
English (159)
Research subject (UKÄ/SCB)
Natural sciences (86)
Humanities (81)
Social sciences (7)
Engineering and technology (6)
Medicine and health sciences (2)

Year
