SwePub
Search the SwePub database

  Advanced search

Results list for search "WFRF:(Rotger Griful Sergi)"

Search: WFRF:(Rotger Griful Sergi)

  • Results 1-7 of 7
Sort/group the results list
   
1.
  • Aliakbaryhosseinabadi, Susan, et al. (author)
  • The Effects of Noise and Simulated Conductive Hearing Loss on Physiological Response Measures During Interactive Conversations
  • 2023
  • In: Journal of Speech, Language and Hearing Research. - : AMER SPEECH-LANGUAGE-HEARING ASSOC. - 1092-4388 .- 1558-9102. ; 66:10, pp. 4009-4024
  • Journal article (peer-reviewed) abstract
    • Purpose: The purpose of this work was to study the effects of background noise and hearing attenuation associated with earplugs on three physiological measures, assumed to be markers of effort investment and arousal, during interactive communication. Method: Twelve pairs of older people (average age of 63.2 years) with age-adjusted normal hearing took part in face-to-face communication to solve a Diapix task. Communication was held at different levels of babble noise (0, 60, and 70 dBA) and with two levels of hearing attenuation (0 and 25 dB) in quiet. The physiological measures obtained included pupil size, heart rate variability, and skin conductance. In addition, subjective ratings of perceived communication success, frustration, and effort were obtained. Results: Ratings of perceived success, frustration, and effort confirmed that communication was more difficult in noise and with approximately 25-dB hearing attenuation and suggested that the implemented levels of noise and hearing attenuation resulted in comparable communication difficulties. Background noise at 70 dBA and hearing attenuation both led to an initial increase in pupil size (associated with effort), but only the effect of the background noise was sustained throughout the conversation. The 25-dB hearing attenuation led to a significant decrease of the high-frequency power of heart rate variability and a significant increase of skin conductance level, measured as the average z value of the electrodermal activity amplitude. Conclusion: This study demonstrated that several physiological measures appear to be viable indicators of changing communication conditions, with pupillometry and cardiovascular as well as electrodermal measures potentially being markers of communication difficulty.
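The skin-conductance measure described above (level expressed as an average z value of electrodermal activity amplitude) can be sketched as follows. The choice of a separate baseline recording as the z-scoring reference is an assumption; the abstract does not specify what the z values were computed against.

```python
import numpy as np

def eda_z_level(condition_amp: np.ndarray, baseline_amp: np.ndarray) -> float:
    """Average electrodermal activity (EDA) amplitude in a test
    condition, expressed as a z value relative to a baseline
    recording. The baseline reference is a hypothetical choice;
    the abstract only states that levels were averaged z values."""
    z = (condition_amp - baseline_amp.mean()) / baseline_amp.std()
    return float(z.mean())
```

A positive value then indicates a higher skin conductance level (greater arousal) in the condition than at baseline.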
2.
  • Favre-Felix, Antoine, et al. (author)
  • Absolute Eye Gaze Estimation With Biosensors in Hearing Aids
  • 2019
  • In: Frontiers in Neuroscience. - : Frontiers Media S.A.. - 1662-4548 .- 1662-453X. ; 13
  • Journal article (peer-reviewed) abstract
    • People with hearing impairment typically have difficulties following conversations in multi-talker situations. Previous studies have shown that utilizing eye gaze to steer audio through beamformers could be a solution for those situations. Recent studies have shown that in-ear electrodes that capture electrooculography in the ear (EarEOG) can estimate the eye-gaze relative to the head, when the head was fixed. The head movement can be estimated using motion sensors around the ear to create an estimate of the absolute eye-gaze in the room. In this study, an experiment was designed to mimic a multi-talker situation in order to study and model the EarEOG signal when participants attempted to follow a conversation. Eleven hearing-impaired participants were presented with speech from the DAT speech corpus (Bo Nielsen et al., 2014), with three targets positioned at -30 degrees, 0 degrees and +30 degrees azimuth. The experiment was run in two setups: one where the participants had their head fixed in a chinrest, and the other where they were free to move their head. The participants' task was to focus their visual attention on an LED-indicated target that changed regularly. A model was developed for the relative eye-gaze estimation, taking saccades, fixations, head movement and drift from the electrode-skin half-cell into account. This model explained 90.5% of the variance of the EarEOG when the head was fixed, and 82.6% when the head was free. The absolute eye-gaze was also estimated utilizing that model. When the head was fixed, the estimation of the absolute eye-gaze was reliable. However, due to hardware issues, the estimation of the absolute eye-gaze when the head was free had a variance that was too large to reliably estimate the attended target. Overall, this study demonstrated the potential of estimating absolute eye-gaze using EarEOG and motion sensors around the ear.
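The core combination step described above, adding head orientation from motion sensors to the EarEOG eye-in-head estimate to obtain absolute gaze, can be sketched as follows. Function and variable names are illustrative, not from the paper.

```python
import numpy as np

def absolute_gaze_deg(head_yaw_deg: np.ndarray, eye_in_head_deg: np.ndarray) -> np.ndarray:
    """Absolute gaze direction in the room: head orientation (from
    motion sensors around the ear) plus eye-in-head angle (from EarEOG)."""
    return head_yaw_deg + eye_in_head_deg

def nearest_target_deg(gaze_deg: float, targets=(-30.0, 0.0, 30.0)) -> float:
    """Map a gaze estimate onto the closest of the three target
    positions used in the experiment (-30, 0, +30 degrees azimuth)."""
    return min(targets, key=lambda t: abs(t - gaze_deg))
```

Estimating the attended target then reduces to snapping the combined angle to the nearest of the three loudspeaker positions.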
3.
  • Nielsen, Annette Cleveland, et al. (author)
  • User-Innovated eHealth Solutions for Service Delivery to Older Persons With Hearing Impairment
  • 2018
  • In: American Journal of Audiology. - : AMER SPEECH-LANGUAGE-HEARING ASSOC. - 1059-0889 .- 1558-9137. ; 27:3, pp. 403-416
  • Journal article (peer-reviewed) abstract
    • Purpose: The successful design and innovation of eHealth solutions directly involve end users in the process to seek a better understanding of their needs. This article presents user-innovated eHealth solutions targeting older persons with hearing impairment. Our research question was: What are the key users' needs, expectations, and visions within future hearing rehabilitation service delivery? Method: We applied a participatory design approach to facilitate the design of future eHealth solutions via focus groups. We involved older persons with hearing impairment (n = 36), significant others (n = 10), and audiologists (n = 8) following 2 methods: (a) human-centered design for interactive systems and (b) user innovation management. Through 3 rounds of focus groups, we facilitated a process progressing from insights and visions for requirements (Phase 1), to app mock-ups such as paper-version wireframes (Phase 2), and to digital prototypes envisioning future eHealth solutions (Phase 3). Each focus group was video-recorded and photographed, resulting in a rich data set that was analyzed through inductive thematic analysis. Results: The results are presented via (a) a storyboard envisioning future client journeys, (b) 3 key themes for future eHealth solutions, (c) 4 levels of interest and willingness to invest time and effort in digital solutions, and (d) 2 technical savviness types and their different preferences for rehabilitation strategies. Conclusions: Future eHealth solutions must offer personalized rehabilitation strategies that are appropriate for every person with hearing impairment and their level of technical savviness. Thus, a central requirement is anchoring of digital support in the clients' everyday life situations by facilitating easy access to personalized information, communication, and learning milieus. Moreover, the participants' visions for eHealth solutions call for providing both traditional analogue and digital services.
4.
  • Shiell, Martha M, et al. (author)
  • Eye-movement patterns of hearing-impaired listeners measure comprehension of a multitalker conversation
  • 2021
  • In: Journal of the Acoustical Society of America. - : American Institute of Physics (AIP). - 0001-4966 .- 1520-8524. ; 149:4, pp. A77-A77
  • Journal article (peer-reviewed) abstract
    • The ability to understand speech in complex listening environments reflects an interaction of cognitive and sensory capacities that are difficult to capture with behavioural tests. The study of natural listening behaviours may lead to the development of new metrics that better reflect real-life communication abilities. To this end, we investigated the relationship between speech comprehension and eye-movements among hearing-impaired people in a challenging listening situation. While previous research has investigated the effect of background noise on listeners' gaze patterns with single talkers, the effect of noise in multitalker conversations remains unknown. We tracked eye-movements of seven aided hearing-impaired adults while they viewed video recordings of two life-sized talkers engaged in an unscripted dialogue. Hearing loss ranged from moderate to severe. We used multiple-choice questions to measure participants' comprehension of the conversation in multitalker babble noise at three different signal-to-noise ratios. All participants made saccades between the two talkers more frequently than the talkers exchanged conversational turns. This measure tended to correlate positively with participants' comprehension scores, but the effect was significant in only one signal-to-noise ratio condition. Post-hoc investigation suggests that intertalker saccade rate is driven by an interaction of hearing ability and conversational turn-taking events, which will be further discussed.
5.
  • Shiell, Martha M., et al. (author)
  • Multilevel Modeling of Gaze From Listeners With Hearing Loss Following a Realistic Conversation
  • 2023
  • In: Journal of Speech, Language and Hearing Research. - : AMER SPEECH-LANGUAGE-HEARING ASSOC. - 1092-4388 .- 1558-9102. ; 66:11, pp. 4575-4589
  • Journal article (peer-reviewed) abstract
    • Purpose: There is a need for tools to study real-world communication abilities in people with hearing loss. We outline a potential method for this that analyzes gaze, and we use it to answer the question of when and how much listeners with hearing loss look toward a new talker in a conversation. Method: Twenty-two older adults with hearing loss followed a prerecorded two-person audiovisual conversation in the presence of babble noise. We compared their eye-gaze direction to the conversation in two multilevel logistic regression (MLR) analyses. First, we split the conversation into events classified by the number of active talkers within a turn or a transition, and we tested if these predicted the listener's gaze. Second, we mapped the odds that a listener gazed toward a new talker over time during a conversation transition. Results: We found no evidence that our conversation events predicted changes in the listener's gaze, but the listener's gaze toward the new talker during a silence-transition was predicted by time: The odds of looking at the new talker increased in an S-shaped curve from at least 0.4 s before to 1 s after the onset of the new talker's speech. A comparison of models with different random effects indicated that more variance was explained by differences between individual conversation events than by differences between individual listeners. Conclusions: MLR modeling of eye-gaze during talker transitions is a promising approach to study a listener's perception of realistic conversation. Our experience provides insight to guide future research with this method.
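The time-course result above, odds of gazing at the new talker rising in an S-shaped curve around speech onset, can be illustrated with a single-level logistic fit on simulated data. The study itself used multilevel models with random effects per conversation event; the data and coefficients below are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
t = rng.uniform(-1.0, 2.0, 2000)             # time (s) re: new talker's speech onset
p_true = 1 / (1 + np.exp(-3.0 * (t - 0.2)))  # assumed S-shaped ground truth
gaze = rng.binomial(1, p_true)               # 1 = gaze on the new talker

# Fit intercept b0 and slope b1 by gradient ascent on the
# logistic log-likelihood (no external dependencies).
b0 = b1 = 0.0
for _ in range(3000):
    p = 1 / (1 + np.exp(-(b0 + b1 * t)))
    b0 += 0.5 * np.mean(gaze - p)
    b1 += 0.5 * np.mean((gaze - p) * t)

def odds_on_new_talker(time_s: float) -> float:
    """Fitted odds of gazing at the new talker at a given time."""
    return float(np.exp(b0 + b1 * time_s))
```

A positive fitted slope reproduces the reported pattern: the odds of looking at the new talker are much higher 1 s after speech onset than 0.4 s before it.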
6.
  • Skoglund, Martin, 1981-, et al. (author)
  • Activity Tracking Using Ear-Level Accelerometers
  • 2021
  • In: Frontiers in digital health. - : Frontiers Media S.A.. - 2673-253X. ; 3
  • Journal article (peer-reviewed) abstract
    • Introduction: By means of adding more sensor technology, modern hearing aids (HAs) strive to become better, more personalized, and self-adaptive devices that can handle environmental changes and cope with the day-to-day fitness of the users. The latest HA technology available in the market already combines sound analysis with motion activity classification based on accelerometers to adjust settings. While there is a lot of research in activity tracking using accelerometers in sports applications and consumer electronics, there is not yet much in hearing research. Objective: This study investigates the feasibility of activity tracking with ear-level accelerometers and how it compares to waist-mounted accelerometers, which is a more common measurement location. Method: The activity classification methods in this study are based on supervised learning. The experimental setup consisted of 21 subjects, equipped with two XSens MTw Awinda sensors at ear level and one at waist level, performing nine different activities. Results: The highest accuracy on our experimental data was obtained with the combination of bagging and classification-tree techniques. The total accuracy over all activities and users was 84% (ear-level), 90% (waist-level), and 91% (ear-level + waist-level). Most prominently, the classes standing, jogging, lying (on one side), lying (face down), and walking all had accuracies above 90%. Furthermore, estimated ear-level step-detection accuracy was 95% in walking and 90% in jogging. Conclusion: It is demonstrated that several activities can be classified, using ear-level accelerometers, with an accuracy that is on par with waist-level. It is indicated that step-detection accuracy is comparable to a high-performance wrist device. These findings are encouraging for the development of activity applications in hearing healthcare.
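The classification approach above, bagged classification trees on accelerometer features, can be sketched on synthetic data. The features (per-window mean and variance of acceleration magnitude) and their values are invented for illustration, not taken from the study; scikit-learn's BaggingClassifier defaults to a decision-tree base learner, matching the bagging + classification-tree combination named in the abstract.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier

# Synthetic per-window features: [mean, variance] of acceleration
# magnitude (in g) for three of the nine activities. Values are
# invented for the example.
rng = np.random.default_rng(1)
n = 300
standing = np.c_[rng.normal(1.0, 0.02, n), rng.normal(0.01, 0.005, n)]
walking  = np.c_[rng.normal(1.1, 0.05, n), rng.normal(0.20, 0.05, n)]
jogging  = np.c_[rng.normal(1.3, 0.10, n), rng.normal(0.60, 0.10, n)]
X = np.vstack([standing, walking, jogging])
y = np.repeat(["standing", "walking", "jogging"], n)

# Bagging with the default base estimator (a classification tree).
clf = BaggingClassifier(n_estimators=25, random_state=1).fit(X, y)
```

In practice the features would be computed from windowed accelerometer signals, and accuracy would be evaluated with held-out subjects rather than on the training data.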
7.
  • Skoglund, Martin, et al. (author)
  • Comparing In-ear EOG for Eye-Movement Estimation With Eye-Tracking : Accuracy, Calibration, and Speech Comprehension
  • 2022
  • In: Frontiers in Neuroscience. - : Frontiers Media SA. - 1662-4548 .- 1662-453X. ; 16
  • Journal article (peer-reviewed) abstract
    • This presentation details and evaluates a method for estimating the attended speaker during a two-person conversation by means of in-ear electro-oculography (EOG). Twenty-five hearing-impaired participants were fitted with molds equipped with EOG electrodes (in-ear EOG) and wore eye-tracking glasses while watching a video of two life-size people in a dialog solving a Diapix task. The dialogue was presented directionally, together with background noise in the frontal hemisphere, at 60 dB SPL. During three conditions of steering (none, in-ear EOG, conventional eye-tracking), participants' comprehension was periodically measured using multiple-choice questions. Based on eye-movement detection by in-ear EOG or conventional eye-tracking, the estimated attended speaker was amplified by 6 dB. In the in-ear EOG condition, the estimate was based on one selected channel pair of electrodes out of 36 possible electrodes. A novel calibration procedure introducing three different metrics was used to select the measurement channel. The in-ear EOG attended-speaker estimates were compared to those of the eye-tracker. Across participants, the mean accuracy of in-ear EOG estimation of the attended speaker was 68%, ranging from 50 to 89%. Based on offline simulation, it was established that higher scoring metrics obtained for a channel with the calibration procedure were significantly associated with better data quality. Results showed a statistically significant improvement in comprehension of about 10% in both steering conditions relative to the no-steering condition. Comprehension in the two steering conditions was not significantly different. Further, better comprehension obtained under the in-ear EOG condition was significantly correlated with more accurate estimation of the attended speaker. In conclusion, this study shows promising results in the use of in-ear EOG for visual attention estimation with potential for applicability in hearing assistive devices.
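The calibration-based channel selection above can be illustrated with one simple scoring metric: absolute correlation between each candidate channel pair and a known reference gaze signal. The paper introduces three specific metrics that the abstract does not define, so the correlation score here is an assumed stand-in.

```python
import numpy as np

def select_channel(eog: np.ndarray, reference: np.ndarray) -> int:
    """Pick the in-ear EOG channel pair that best tracks a known
    reference gaze signal during calibration.

    eog:       (n_channels, n_samples) candidate channel-pair signals
    reference: (n_samples,) reference gaze angle during calibration
    Scoring metric (assumed): absolute Pearson correlation.
    """
    scores = [abs(np.corrcoef(ch, reference)[0, 1]) for ch in eog]
    return int(np.argmax(scores))
```

In the study, the signal from the single selected pair (out of 36 electrodes) then drove the 6 dB amplification of the estimated attended speaker.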


 
