SwePub
Search the SwePub database


Results for search "WFRF:(Nirme Jens)"

  • Results 1-10 of 19
1.
  • Anikin, Andrey, et al. (author)
  • Compensation for a large gesture-speech asynchrony in instructional videos
  • 2015
  • In: Gesture and Speech in Interaction - 4th edition (GESPIN 4), pp. 19-23
  • Conference paper (peer-reviewed) abstract:
    • We investigated the pragmatic effects of gesture-speech lag by asking participants to reconstruct formations of geometric shapes based on instructional films in four conditions: synchronized, video lag or audio lag (±1,500 ms), and audio only. All three video groups rated the task as less difficult than the audio-only group did, and performed better. The scores were slightly lower when sound preceded gestures (video lag), but not when gestures preceded sound (audio lag). Participants thus compensated for delays of 1.5 seconds in either direction, apparently without making a conscious effort. This greatly exceeds the previously reported time window for automatic multimodal integration.
2.
  • Carlie, Johanna, et al. (author)
  • Development of an Auditory Passage Comprehension Task for Swedish Primary School Children of Cultural and Linguistic Diversity
  • 2021
  • In: Journal of Speech, Language, and Hearing Research. - : American Speech-Language-Hearing Association. - 1558-9102 .- 1092-4388. ; 64:10
  • Journal article (peer-reviewed) abstract:
    • Purpose: This study reports on the development of an auditory passage comprehension task for Swedish primary school children of cultural and linguistic diversity. It also reports on their performance on the task in quiet and in noise. Method: Eighty-eight children aged 7-9 years with normal hearing participated. The children were divided into three groups based on presumed language exposure: 13 children were categorized as Swedish-speaking monolinguals, 19 as simultaneous bilinguals, and 56 as sequential bilinguals. No significant difference in working memory capacity was seen between the three language groups. Two passages and associated multiple-choice questions were developed. During development of the passage comprehension task, steps were taken to reduce the impact of culture-specific prior experience and knowledge on performance. This was achieved by using story grammar principles, universal topics and plots, and simple language that avoided complex or unusual grammatical structures and words. Results: The findings indicate no significant difference between the two passages and similar response distributions. Passage comprehension performance was significantly better in quiet than in noise, regardless of language exposure group. The monolinguals outperformed both simultaneous and sequential bilinguals in both listening conditions. Conclusions: Because the task was designed to minimize the effect of cultural knowledge on auditory passage comprehension, this suggests that, compared with monolinguals, both simultaneous and sequential bilinguals have a disadvantage in auditory passage comprehension. As expected, the findings demonstrate that noise has a negative effect on auditory passage comprehension; the magnitude of this effect does not relate to language exposure. The developed auditory passage comprehension task seems suitable for assessing auditory passage comprehension in primary school children of linguistic and cultural diversity.
3.
  • Ekström, Axel G., et al. (author)
  • Motion iconicity in prosody
  • 2022
  • In: Frontiers in Communication. - : Frontiers Media SA. - 2297-900X. ; 7
  • Journal article (peer-reviewed) abstract:
    • Evidence suggests that human non-verbal speech may be rich in iconicity. Here, we report results from two experiments aimed at testing whether perception of increasing and declining f0 can be iconically mapped onto motion events. We presented a sample of mixed-nationality participants (N = 118) with sets of two videos, where one pictured upward movement and the other downward movement. A disyllabic nonsense word, prosodically resynthesized as increasing or declining in f0, was presented simultaneously with each video in a pair, and participants were tasked with guessing which of the two videos the word described. Results indicate that prosody is iconically associated with motion, such that motion-prosody congruent pairings were selected more readily than incongruent pairings (p < 0.033). However, the effect observed in our sample was driven primarily by selections of words with declining f0. A follow-up experiment with native Turkish-speaking participants (N = 92) tested for the effect of language-specific metaphors for auditory pitch. Results showed no significant association between prosody and motion. Limitations of the experiment, and some implications for the motor theory of speech perception and "gestural origins" theories of language evolution, are discussed.
4.
  • Halder, Amitava, et al. (author)
  • Effects of leg fatigue due to exhaustive stair climbing on gait biomechanics while walking up a 10° incline – implications for evacuation and work safety
  • 2021
  • In: Fire Safety Journal. - : Elsevier BV. - 0379-7112.
  • Journal article (peer-reviewed) abstract:
    • This biomechanics study explored stride length (SL), stride duration (SDN), gait ground reaction forces (GRFspeak), required coefficient of friction (RCOFpeak), joint angles (anglepeak, anglemin), angular velocities (angvelx peak), angular accelerations (angaccx peak), and muscle electromyography (EMG) during the dominant-leg stance phase (SP) following an exhaustive stair ascent for evacuation. Data were collected by a three-dimensional motion capture system synchronized with EMG and a force plate while participants walked up a 10° inclined walkway. The significantly (p ≤ 0.05) decreased EMG median frequencies of the tibialis anterior during the early (ES) and late stance (LS) phases, and of the vastus lateralis during LS, are evidence of local muscle fatigue (LMF) in the leg. The perpendicular and longitudinal shear GRFspeaks were significantly reduced during ES (p ≤ 0.05) and LS (p ≤ 0.01), respectively. The post-fatigue SP, SL, and SDN were significantly (p < 0.05) shorter. In particular, the foot anglemins, ankle anglepeaks, and the relevant angvelx peaks and angaccx peaks decreased significantly (p ≤ 0.05) in post-fatigue trials. The post-fatigue RCOFpeaks were significantly (p ≤ 0.01) lower during the LS phase. Thus, whole-body exhaustion and leg LMF constrained gait kinetics and kinematics during uphill walking, indicating a cautious gait associated with risks of falls and accidents, which can hinder evacuation and compromise work safety on slopes.
5.
6.
  • Lingonblad, Martin, et al. (author)
  • Virtual Blindness - A Choice Blindness Experiment with a Virtual Experimenter
  • 2015
  • In: Intelligent Virtual Agents. - Cham : Springer International Publishing. - 0302-9743 .- 1611-3349. ; 9238, pp. 442-451
  • Conference paper (peer-reviewed) abstract:
    • How are people facing a virtual agent affected by the vividness and graphical fidelity of the agent and its environment? A choice blindness (CB) experiment, measuring the detection rate of hidden manipulations, was conducted with a high- versus low-immersion virtual environment. The hypothesis was that the lower-quality virtual environment (low immersion) would increase the detection rate for the CB manipulations. 38 subjects participated in the experiment and were randomized into two groups (high and low immersion). In both conditions a virtual agent conducted the CB experiment. During the experiment, 16 pairs of portraits were shown, two at a time, to the participants, who were then asked to choose which portrait they found most attractive. For eight of the pairs, participants were asked to justify their choice, while in four of those cases their choice had been secretly switched to the portrait they had not chosen. If a participant stated that the chosen portrait had been switched, it was annotated as a concurrent detection. The results revealed a higher detection rate and earlier detections in the low-immersion implementation than in the high-immersion implementation. Future research may involve experiments with a higher degree of both immersion and presence, using for example head-mounted display systems.
7.
  • Nirme, Jens, et al. (author)
  • A virtual speaker in noisy classroom conditions: supporting or disrupting children’s listening comprehension?
  • 2019
  • In: Logopedics Phoniatrics Vocology. - : Informa UK Limited. - 1401-5439 .- 1651-2022. ; 44:2, pp. 79-86
  • Journal article (peer-reviewed) abstract:
    • Aim: Seeing a speaker’s face facilitates speech recognition, particularly under noisy conditions. Evidence for how it might affect comprehension of the content of the speech is sparser. We investigated how children’s listening comprehension is affected by multi-talker babble noise, with or without presentation of a digitally animated virtual speaker, and whether successful comprehension is related to performance on a test of executive functioning. Materials and Methods: We performed a mixed-design experiment with 55 (34 female) participants (8- to 9-year-olds), recruited from Swedish elementary schools. The children were presented with four different narratives, each in one of four conditions: audio-only presentation in a quiet setting, audio-only presentation in a noisy setting, audio-visual presentation in a quiet setting, and audio-visual presentation in a noisy setting. After each narrative, the children answered questions on the content and rated their perceived listening effort. Finally, they performed a test of executive functioning. Results: We found significantly fewer correct answers to explicit content questions after listening in noise. This negative effect was mitigated only to a marginally significant degree by audio-visual presentation. Strong executive function predicted more correct answers only in quiet settings. Conclusions: Altogether, our results are inconclusive regarding how seeing a virtual speaker affects listening comprehension. We discuss how methodological adjustments, including modifications to our virtual speaker, can be used to discriminate between possible explanations for our results and contribute to understanding the listening conditions children face in a typical classroom.
8.
  • Nirme, Jens, et al. (author)
  • Audio-visual speech comprehension in noise with real and virtual speakers
  • 2020
  • In: Speech Communication. - : Elsevier BV. - 0167-6393. ; 116, pp. 44-55
  • Journal article (peer-reviewed) abstract:
    • This paper presents a study in which a 3D motion-capture-animated ‘virtual speaker’ is compared to a video of a real speaker with regard to how it facilitates children's speech comprehension of narratives in background multi-talker babble noise. As secondary measures, children self-assess the listening and attentional effort demanded by the task, and associate words describing positive or negative social traits with the speaker. The results show that the virtual speaker, despite being associated with more negative social traits, facilitates speech comprehension in babble noise compared to a voice-only presentation, but that the effect requires some adaptation. We also found the virtual speaker to be at least as facilitating as the video. We interpret these results to suggest that audiovisual integration supports speech comprehension independently of children's social perception of the speaker, and discuss virtual speakers’ potential in research and pedagogical applications.
9.
10.
  • Nirme, Jens, et al. (author)
  • Early or synchronized gestures facilitate speech recall — a study based on motion capture data
  • 2024
  • In: Frontiers in Psychology. - 1664-1078. ; 15
  • Journal article (peer-reviewed) abstract:
    • Introduction: Temporal coordination between speech and gestures has been thoroughly studied in natural production. In most cases gesture strokes precede or coincide with the stressed syllable of the words they are semantically associated with. Methods: To understand whether processing of speech and gestures is attuned to such temporal coordination, we investigated the effect of delaying, preposing or eliminating individual gestures on memory for words in an experimental study in which 83 participants watched video sequences of naturalistic 3D-animated speakers generated from motion capture data. A target word in the sequence appeared (a) with a gesture presented in its original position, synchronized with speech, (b) temporally shifted 500 ms before or (c) after the original position, or (d) with the gesture eliminated. Participants were asked to retell the videos in a free recall task. The strength of recall was operationalized as the inclusion of the target word in the free recall. Results: Both eliminated and delayed gesture strokes resulted in reduced recall rates compared to synchronized strokes, whereas there was no difference between advanced (preposed) and synchronized strokes. An item-level analysis also showed that the greater the interval between the onsets of delayed strokes and the stressed syllables in target words, the greater the negative effect on recall. Discussion: These results indicate that speech-gesture synchrony affects memory for speech, and that temporal patterns that are common in production lead to the best recall. Importantly, the study also showcases a procedure for using motion-capture-based 3D-animated speakers to create an experimental paradigm for the study of speech-gesture comprehension.