SwePub
Search the SwePub database



Search: WFRF:(Alickovic Emina)

  • Result 1-28 of 28
1.
  • Ala, Tirdad Seifi, et al. (author)
  • Alpha Oscillations During Effortful Continuous Speech: From Scalp EEG to Ear-EEG
  • 2023
  • In: IEEE Transactions on Biomedical Engineering. - : IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC. - 0018-9294 .- 1558-2531. ; 70:4, s. 1264-1273
  • Journal article (peer-reviewed)abstract
    • Objective: The purpose of this study was to investigate alpha power as an objective measure of effortful listening in continuous speech with scalp and ear-EEG. Methods: Scalp and ear-EEG were recorded simultaneously during presentation of a 33-s news clip in the presence of 16-talker babble noise. Four different signal-to-noise ratios (SNRs) were used to manipulate task demand. The effects of changes in SNR were investigated on alpha event-related synchronization (ERS) and desynchronization (ERD). Alpha activity was extracted from scalp EEG using different referencing methods (common average and symmetrical bi-polar) in different regions of the brain (parietal and temporal) and from ear-EEG. Results: Alpha ERS decreased with decreasing SNR (i.e., increasing task demand) in both scalp and ear-EEG. Alpha ERS was also positively correlated with behavioural performance, which was assessed with questions about the content of the speech. Conclusion: Alpha ERS/ERD is better suited to tracking performance in continuous speech than listening effort. Significance: EEG alpha power in continuous speech may indicate how well the speech was perceived, and it can be measured with both scalp and ear-EEG.
  •  
2.
  • Ala, Tirdad Seifi, et al. (author)
  • An Exploratory Study of EEG Alpha Oscillation and Pupil Dilation in Hearing-Aid Users During Effortful Listening to Continuous Speech
  • 2020
  • In: PLOS ONE. - : PUBLIC LIBRARY SCIENCE. - 1932-6203. ; 15:7
  • Journal article (peer-reviewed)abstract
    • Individuals with hearing loss allocate cognitive resources to comprehend noisy speech in everyday life scenarios. Such a scenario could be when they are exposed to ongoing speech and need to sustain their attention for a rather long period of time, which requires listening effort. Two well-established physiological methods that have been found to be sensitive to identify changes in listening effort are pupillometry and electroencephalography (EEG). However, these measurements have been used mainly for momentary, evoked or episodic effort. The aim of this study was to investigate how sustained effort manifests in pupillometry and EEG, using continuous speech with varying signal-to-noise ratio (SNR). Eight hearing-aid users participated in this exploratory study and performed a continuous speech-in-noise task. The speech material consisted of 30-second continuous streams that were presented from loudspeakers to the right and left side of the listener (+/- 30 degrees azimuth) in the presence of 4-talker background noise (+180 degrees azimuth). The participants were instructed to attend either to the right or left speaker and ignore the other in a randomized order with two different SNR conditions: 0 dB and -5 dB (the difference between the target and the competing talker). The effects of SNR on listening effort were explored objectively using pupillometry and EEG. The results showed larger mean pupil dilation and decreased EEG alpha power in the parietal lobe during the more effortful condition. This study demonstrates that both measures are sensitive to changes in SNR during continuous speech.
  •  
3.
  • Alickovic, Emina, et al. (author)
  • A System Identification Approach to Determining Listening Attention from EEG Signals
  • 2016
  • In: 2016 24TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO). - : IEEE. - 9780992862657 - 9781509018918 ; , s. 31-35
  • Conference paper (peer-reviewed)abstract
    • We still have very little knowledge about how our brains decouple different sound sources, which is known as solving the cocktail party problem. Several approaches, including ERP, time-frequency analysis and, more recently, regression and stimulus reconstruction approaches, have been suggested for solving this problem. In this work, we study the problem of correlating EEG signals to different sets of sound sources with the goal of identifying the single source to which the listener is attending. Here, we propose a method for finding the number of parameters needed in a regression model to avoid overlearning, which is necessary for determining the attended sound source with high confidence in order to solve the cocktail party problem.
  •  
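The model-order question in the entry above (how many regression parameters before overlearning sets in) can be illustrated with a small simulation: fit lagged linear (FIR-style) models of increasing order and pick the order that minimises held-out error. This is only a hedged sketch of the general idea, not the paper's actual method; the stimulus, 4-tap kernel, noise level, and train/validation split are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def lagged_design(stim, n_lags):
    # Columns are progressively delayed copies of the stimulus.
    X = np.zeros((len(stim), n_lags))
    for k in range(n_lags):
        X[k:, k] = stim[:len(stim) - k]
    return X

# Toy data: "EEG" is the stimulus filtered by a 4-tap kernel plus noise.
stim = rng.standard_normal(2000)
kernel = np.array([0.5, 1.0, -0.3, 0.1])
eeg = np.convolve(stim, kernel)[:len(stim)] + 0.5 * rng.standard_normal(len(stim))

train, val = slice(0, 1500), slice(1500, 2000)

def val_error(n_lags):
    # Least-squares fit on the training span, error on the held-out span.
    X = lagged_design(stim, n_lags)
    w, *_ = np.linalg.lstsq(X[train], eeg[train], rcond=None)
    return float(np.mean((eeg[val] - X[val] @ w) ** 2))

errors = [val_error(n) for n in range(1, 16)]
best_order = 1 + int(np.argmin(errors))
print("validation-optimal number of lags:", best_order)
```

Orders far beyond the true kernel length add parameters without reducing held-out error, which is the overlearning the abstract guards against.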
4.
  • Alickovic, Emina, et al. (author)
  • A Tutorial on Auditory Attention Identification Methods
  • 2019
  • In: Frontiers in Neuroscience. - : FRONTIERS MEDIA SA. - 1662-4548 .- 1662-453X. ; 13
  • Journal article (peer-reviewed)abstract
    • Auditory attention identification methods attempt to identify the sound source of a listener's interest by analyzing measurements of electrophysiological data. We present a tutorial on the numerous techniques that have been developed in recent decades, and we present an overview of current trends in multivariate correlation-based and model-based learning frameworks. The focus is on the use of linear relations between electrophysiological and audio data. The way in which these relations are computed differs. For example, canonical correlation analysis (CCA) finds a linear subset of electrophysiological data that best correlates to audio data and a similar subset of audio data that best correlates to electrophysiological data. Model-based (encoding and decoding) approaches focus on either of these two sets. We investigate the similarities and differences between these linear model philosophies. We focus on (1) correlation-based approaches (CCA), (2) encoding/decoding models based on dense estimation, and (3) (adaptive) encoding/decoding models based on sparse estimation. The specific focus is on sparsity-driven adaptive encoding models and comparing the methodology in state-of-the-art models found in the auditory literature. Furthermore, we outline the main signal processing pipeline for how to identify the attended sound source in a cocktail party environment from the raw electrophysiological data with all the necessary steps, complemented with the necessary MATLAB code and the relevant references for each step. Our main aim is to compare the methodology of the available methods, and provide numerical illustrations of some of them to give a feeling for their potential. A thorough performance comparison is outside the scope of this tutorial.
  •  
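The correlation-based (CCA) philosophy the tutorial above describes can be sketched in a few lines: whiten each data view with its covariance, then take the largest singular value of the whitened cross-covariance. This numpy toy (the tutorial itself provides MATLAB code) uses an invented shared latent "envelope"; it is an illustrative sketch, not the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

def first_canonical_correlation(X, Y, reg=1e-6):
    # Centre both views (rows are samples, columns are variables).
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = len(X)
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n

    def inv_sqrt(C):
        # Inverse matrix square root via eigendecomposition (C is symmetric PD).
        vals, vecs = np.linalg.eigh(C)
        return vecs @ np.diag(vals ** -0.5) @ vecs.T

    # Largest singular value of the whitened cross-covariance = first canonical correlation.
    K = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)
    return float(np.linalg.svd(K, compute_uv=False)[0])

# Toy data: one shared latent "envelope" drives one EEG channel and the audio feature.
latent = rng.standard_normal(5000)
eeg = np.column_stack([latent + 0.3 * rng.standard_normal(5000),
                       rng.standard_normal(5000)])   # second channel: pure noise
audio = (latent + 0.3 * rng.standard_normal(5000))[:, None]
r = first_canonical_correlation(eeg, audio)
print("first canonical correlation:", round(r, 3))
```

CCA finds the shared latent through the noise, which is exactly the symmetry (both views projected toward each other) that distinguishes it from one-sided encoding or decoding models.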
5.
  • Alickovic, Emina, et al. (author)
  • Automatic Detection of Alzheimer Disease Based on Histogram and Random Forest
  • 2020
  • In: PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON MEDICAL AND BIOLOGICAL ENGINEERING, CMBEBIH 2019. - Cham : SPRINGER. - 9783030179717 - 9783030179700 ; , s. 91-96
  • Conference paper (peer-reviewed)abstract
    • Alzheimer disease is one of the most prevalent dementia types affecting the elderly population. Timely detection of the Alzheimer disease (AD) is valuable for finding new approaches for AD treatment. Our primary interest lies in obtaining a reliable, but simple and fast model for automatic AD detection. The approach we introduce in the present contribution to identify AD is based on the application of machine learning (ML) techniques. In the first step, we use a histogram to transform brain images to feature vectors containing the relevant "brain" features, which later serve as the inputs in the classification step. Next, we use ML algorithms in the classification task to identify AD. The model presented and elaborated in the present contribution demonstrated satisfactory performance. Experimental results suggest that the Random Forest classifier can discriminate AD subjects from control subjects. The presented modeling approach, consisting of the histogram as the feature extractor and Random Forest as the classifier, yielded a sufficiently high overall accuracy of 85.77%.
  •  
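The histogram-as-feature-extractor step described in the entry above is simple to sketch: bin the image intensities and normalise the counts into a fixed-length vector. The "scans" below are random stand-in arrays (an assumption purely for illustration), and the resulting vectors are what would be handed to the Random Forest classifier in the second step.

```python
import numpy as np

rng = np.random.default_rng(2)

def histogram_features(image, n_bins=16):
    # Normalised intensity histogram as a compact, size-independent feature vector.
    counts, _ = np.histogram(image, bins=n_bins, range=(0.0, 1.0))
    return counts / counts.sum()

# Stand-ins for brain scans: hypothetical "patients" are darker on average.
control = rng.beta(5, 2, size=(64, 64))
patient = rng.beta(2, 5, size=(64, 64))
fc, fp = histogram_features(control), histogram_features(patient)
print("feature vector length:", len(fc))
# These vectors would then be fed to a Random Forest in the classification step.
```

A histogram discards spatial layout but keeps the intensity distribution, which is what makes it "simple and fast" as the abstract emphasises.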
6.
  • Alickovic, Emina, et al. (author)
  • Decoding Auditory Attention From EEG Data Using Cepstral Analysis
  • 2023
  • In: ICASSPW 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing Workshops, Proceedings. - : IEEE. - 9798350302615 - 9798350302622
  • Conference paper (peer-reviewed)abstract
    • Recent studies of selective auditory attention have demonstrated that neural responses recorded with electroencephalogram (EEG) can be decoded to classify the attended talker in everyday multitalker cocktail-party environments. This is generally referred to as auditory attention decoding (AAD) and could lead to a breakthrough for the next generation of hearing aids (HAs), giving them the ability to be cognitively controlled. The aim of this paper is to investigate whether cepstral analysis can be used as a more robust mapping between speech and EEG. Our preliminary analysis revealed an average AAD accuracy of 96%. Moreover, we observed a significant increase in auditory attention classification accuracies with our approach over the use of traditional AAD methods (7% absolute increase). Overall, our exploratory study could open a new avenue for developing new AAD methods to further advance hearing technology. We recognize that additional research is needed to elucidate the full potential of cepstral analysis for AAD.
  •  
7.
  • Alickovic, Emina, et al. (author)
  • Effects of Hearing Aid Noise Reduction on Early and Late Cortical Representations of Competing Talkers in Noise
  • 2021
  • In: Frontiers in Neuroscience. - : Frontiers Media S.A.. - 1662-4548 .- 1662-453X. ; 15
  • Journal article (peer-reviewed)abstract
    • Objectives Previous research using non-invasive (magnetoencephalography, MEG) and invasive (electrocorticography, ECoG) neural recordings has demonstrated the progressive and hierarchical representation and processing of complex multi-talker auditory scenes in the auditory cortex. Early responses (<85 ms) in primary-like areas appear to represent the individual talkers with almost equal fidelity and are independent of attention in normal-hearing (NH) listeners. However, late responses (>85 ms) in higher-order non-primary areas selectively represent the attended talker with significantly higher fidelity than unattended talkers in NH and hearing-impaired (HI) listeners. Motivated by these findings, the objective of this study was to investigate the effect of a noise reduction scheme (NR) in a commercial hearing aid (HA) on the representation of complex multi-talker auditory scenes in distinct hierarchical stages of the auditory cortex by using high-density electroencephalography (EEG). Design We addressed this issue by investigating early (<85 ms) and late (>85 ms) EEG responses recorded in 34 HI subjects fitted with HAs. The HA noise reduction (NR) was either on or off while the participants listened to a complex auditory scene. Participants were instructed to attend to one of two simultaneous talkers in the foreground while multi-talker babble noise played in the background (+3 dB SNR). After each trial, a two-choice question about the content of the attended speech was presented. Results Using a stimulus reconstruction approach, our results suggest that the attention-related enhancement of neural representations of target and masker talkers located in the foreground, as well as suppression of the background noise in distinct hierarchical stages is significantly affected by the NR scheme. 
We found that the NR scheme contributed to the enhancement of the foreground and of the entire acoustic scene in the early responses, and that this enhancement was driven by better representation of the target speech. We found that the target talker in HI listeners was selectively represented in late responses. We found that use of the NR scheme resulted in enhanced representations of the target and masker speech in the foreground and a suppressed representation of the noise in the background in late responses. We found a significant effect of EEG time window on the strengths of the cortical representation of the target and masker. Conclusion Together, our analyses of the early and late responses obtained from HI listeners support the existing view of hierarchical processing in the auditory cortex. Our findings demonstrate the benefits of a NR scheme on the representation of complex multi-talker auditory scenes in different areas of the auditory cortex in HI listeners.
  •  
8.
  • Alickovic, Emina, et al. (author)
  • Ensemble SVM Method for Automatic Sleep Stage Classification
  • 2018
  • In: IEEE Transactions on Instrumentation and Measurement. - : IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC. - 0018-9456 .- 1557-9662. ; 67:6, s. 1258-1265
  • Journal article (peer-reviewed)abstract
    • Sleep scoring is used in the diagnosis and treatment of sleep disorders. Automated sleep scoring is crucial, since the large volume of data would otherwise have to be analyzed visually by sleep specialists, which is burdensome, time-consuming, tedious, subjective, and error-prone. Therefore, automated sleep stage classification is a crucial step in sleep research and sleep disorder diagnosis. In this paper, a robust system, consisting of three modules, is proposed for automated classification of sleep stages from the single-channel electroencephalogram (EEG). In the first module, signals taken from the Pz-Oz electrode were denoised using multiscale principal component analysis. In the second module, the most informative features were extracted using the discrete wavelet transform (DWT), and then statistical values of the DWT subbands were calculated. In the third module, the extracted features were fed into an ensemble classifier, called rotational support vector machine (RotSVM). The proposed classifier combines advantages of principal component analysis and the SVM to improve the classification performance of the traditional SVM. The sensitivity and accuracy values across all subjects were 84.46% and 91.1%, respectively, for five-stage sleep classification, with a Cohen's kappa coefficient of 0.88. The obtained classification results indicate that an efficient sleep monitoring system is feasible with a single-channel EEG and can be used effectively in medical and home-care applications.
  •  
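The second module above (DWT subband decomposition followed by statistical features) can be sketched in plain Python with the Haar wavelet. The epoch, number of levels, and choice of statistics below are illustrative assumptions, not the paper's exact configuration.

```python
import math

def haar_dwt(signal):
    # One Haar DWT level: pairwise scaled sums (approximation) and differences (detail).
    approx = [(signal[i] + signal[i + 1]) / math.sqrt(2) for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / math.sqrt(2) for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def subband_stats(coeffs):
    # Two simple statistics per subband: mean absolute value and standard deviation.
    n = len(coeffs)
    mean = sum(coeffs) / n
    mean_abs = sum(abs(c) for c in coeffs) / n
    std = math.sqrt(sum((c - mean) ** 2 for c in coeffs) / n)
    return [mean_abs, std]

# Stand-in EEG epoch: a slow oscillation plus a faster ripple.
epoch = [math.sin(0.05 * t) + 0.2 * math.sin(1.5 * t) for t in range(512)]
features, approx = [], epoch
for _ in range(3):                       # three decomposition levels
    approx, detail = haar_dwt(approx)
    features += subband_stats(detail)
features += subband_stats(approx)        # keep the final approximation band too
print("feature vector length:", len(features))
```

Each level halves the signal length and isolates a coarser frequency band, so a handful of statistics per band compresses the whole epoch into a short vector for the classifier.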
9.
  • Alickovic, Emina, et al. (author)
  • Medical Decision Support System for Diagnosis of Heart Arrhythmia using DWT and Random Forests Classifier
  • 2016
  • In: Journal of medical systems. - : SPRINGER. - 0148-5598 .- 1573-689X. ; 40:4, s. 108-
  • Journal article (peer-reviewed)abstract
    • In this study, a Random Forests (RF) classifier is proposed for ECG heartbeat signal classification in the diagnosis of heart arrhythmia. The discrete wavelet transform (DWT) is used to decompose ECG signals into successive frequency bands. A set of different statistical features was extracted from the obtained frequency bands to denote the distribution of wavelet coefficients. This study shows that the RF classifier achieves superior performance compared to other decision tree methods using 10-fold cross-validation for the ECG datasets, and the obtained results suggest that further significant improvements in classification accuracy can be accomplished by the proposed classification system. Accurate ECG signal classification is the major requirement for detection of all arrhythmia types. The performance of the proposed system has been evaluated on two different databases, namely the MIT-BIH database and the St. Petersburg Institute of Cardiological Technics 12-lead Arrhythmia Database. For the MIT-BIH database, the RF classifier yielded an overall accuracy of 99.33% against 98.44% and 98.67% for the C4.5 and CART classifiers, respectively. For the St. Petersburg Institute of Cardiological Technics 12-lead Arrhythmia Database, the RF classifier yielded an overall accuracy of 99.95% against 99.80% for both the C4.5 and CART classifiers. The combined model with multiscale principal component analysis (MSPCA) de-noising, discrete wavelet transform (DWT), and the RF classifier also achieves better performance, with the area under the receiver operating characteristic (ROC) curve (AUC) and F-measure equal to 0.999 and 0.993 for the MIT-BIH database and 1 and 0.999 for the St. Petersburg Institute of Cardiological Technics 12-lead Arrhythmia Database, respectively. The obtained results demonstrate that the proposed system is capable of reliable classification of ECG signals and can assist clinicians in making an accurate diagnosis of cardiovascular disorders (CVDs).
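The Random Forest idea used above (bootstrapped trees voting by majority) can be miniaturised to a forest of depth-1 decision stumps in plain Python. This is a hedged toy, not the paper's pipeline: the two-feature "wavelet statistic" vectors and labels are invented, and real forests grow deeper trees with random feature subsets at each split.

```python
import random

random.seed(3)

def train_stump(X, y):
    # Best (feature, threshold) split by training accuracy; predicts class 1 above it.
    best = (-1.0, 0, 0.0)
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            acc = sum((1 if row[f] > t else 0) == lab for row, lab in zip(X, y)) / len(y)
            if acc > best[0]:
                best = (acc, f, t)
    return best[1], best[2]

def stump_predict(stump, row):
    f, t = stump
    return 1 if row[f] > t else 0

def forest_fit(X, y, n_trees=25):
    # Each "tree" is a stump trained on a bootstrap resample of the data.
    forest = []
    for _ in range(n_trees):
        idx = [random.randrange(len(X)) for _ in range(len(X))]
        forest.append(train_stump([X[i] for i in idx], [y[i] for i in idx]))
    return forest

def forest_predict(forest, row):
    # Majority vote across the ensemble.
    votes = sum(stump_predict(s, row) for s in forest)
    return 1 if 2 * votes >= len(forest) else 0

# Invented feature vectors (e.g. two wavelet statistics); class 1 has a larger first feature.
X = [[0.20, 0.50], [0.30, 0.40], [0.25, 0.60], [0.80, 0.50], [0.90, 0.45], [0.85, 0.55]]
y = [0, 0, 0, 1, 1, 1]
forest = forest_fit(X, y)
print([forest_predict(forest, row) for row in X])
```

The bootstrap resampling decorrelates the individual learners, so the majority vote is more stable than any single tree, which is the property the abstract's cross-validation comparison against C4.5 and CART exercises.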
  •  
10.
  • Alickovic, Emina, et al. (author)
  • Neural Representation Enhanced for Speech and Reduced for Background Noise With a Hearing Aid Noise Reduction Scheme During a Selective Attention Task
  • 2020
  • In: Frontiers in Neuroscience. - : FRONTIERS MEDIA SA. - 1662-4548 .- 1662-453X. ; 14
  • Journal article (peer-reviewed)abstract
    • Objectives Selectively attending to a target talker while ignoring multiple interferers (competing talkers and background noise) is more difficult for hearing-impaired (HI) individuals compared to normal-hearing (NH) listeners. Such tasks also become more difficult as background noise levels increase. To overcome these difficulties, hearing aids (HAs) offer noise reduction (NR) schemes. The objective of this study was to investigate the effect of NR processing (inactive, where the NR feature was switched off, vs. active, where the NR feature was switched on) on the neural representation of speech envelopes across two different background noise levels [+3 dB signal-to-noise ratio (SNR) and +8 dB SNR] by using a stimulus reconstruction (SR) method. Design To explore how NR processing supports the listener's selective auditory attention, we recruited 22 HI participants fitted with HAs. To investigate the interplay between NR schemes, background noise, and neural representation of the speech envelopes, we used electroencephalography (EEG). The participants were instructed to listen to a target talker in front while ignoring a competing talker in front in the presence of multi-talker background babble noise. Results The results show that the neural representation of the attended speech envelope was enhanced by the active NR scheme for both background noise levels. The neural representation of the attended speech envelope at lower (+3 dB) SNR was shifted, approximately by 5 dB, toward the higher (+8 dB) SNR when the NR scheme was turned on. The neural representation of the ignored speech envelope was modulated by the NR scheme and was mostly enhanced in the conditions with more background noise. The neural representation of the background noise was modulated (i.e., reduced) by the NR scheme and was significantly reduced in the conditions with more background noise. 
The neural representation of the net sum of the ignored acoustic scene (ignored talker and background babble) was not modulated by the NR scheme but was significantly reduced in the conditions with a reduced level of background noise. Taken together, we showed that the active NR scheme enhanced the neural representation of both the attended and the ignored speakers and reduced the neural representation of background noise, while the net sum of the ignored acoustic scene was not enhanced. Conclusion Altogether our results support the hypothesis that the NR schemes in HAs serve to enhance the neural representation of speech and reduce the neural representation of background noise during a selective attention task. We contend that these results provide a neural index that could be useful for assessing the effects of HAs on auditory and cognitive processing in HI populations.
  •  
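The stimulus reconstruction (SR) method referred to in the entries above can be sketched as a ridge-regression decoder that maps multichannel EEG back to a speech envelope, then compares the reconstruction's correlation with candidate envelopes. This is a hedged, lag-free toy on synthetic data; real SR decoders use a window of time lags and the studies' actual recorded stimuli.

```python
import numpy as np

rng = np.random.default_rng(4)
n, n_ch = 4000, 16

# Synthetic smooth "envelopes" for an attended and an ignored talker.
attended = np.convolve(rng.standard_normal(n), np.ones(8) / 8, mode="same")
ignored = np.convolve(rng.standard_normal(n), np.ones(8) / 8, mode="same")
# Synthetic EEG: every channel mixes the attended envelope strongly, the ignored weakly.
eeg = (np.outer(attended, rng.standard_normal(n_ch))
       + 0.3 * np.outer(ignored, rng.standard_normal(n_ch))
       + 0.5 * rng.standard_normal((n, n_ch)))

def ridge_fit(X, y, lam=1.0):
    # Closed-form ridge regression: (X'X + lam*I)^-1 X'y.
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def corr(a, b):
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

train, test = slice(0, n // 2), slice(n // 2, n)
w = ridge_fit(eeg[train], attended[train])     # decoder trained on the attended envelope
recon = eeg[test] @ w                          # reconstructed envelope on held-out data
r_att, r_ign = corr(recon, attended[test]), corr(recon, ignored[test])
print(f"attended r = {r_att:.2f}, ignored r = {r_ign:.2f}")
```

The gap between the two held-out correlations is the kind of quantity these studies use as an index of how strongly each sound source is represented neurally.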
11.
  • Alickovic, Emina, et al. (author)
  • Normalized Neural Networks for Breast Cancer Classification
  • 2020
  • In: PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON MEDICAL AND BIOLOGICAL ENGINEERING, CMBEBIH 2019. - Cham : SPRINGER. - 9783030179717 - 9783030179700 ; , s. 519-524
  • Conference paper (peer-reviewed)abstract
    • In almost all parts of the world, breast cancer is one of the major causes of death among women. At the same time, it is one of the most curable cancers if diagnosed at an early stage. This paper aims to find a model that diagnoses and classifies breast cancer with high accuracy and helps both patients and doctors in the future. Here we develop a model using a normalized multilayer perceptron neural network to classify breast cancer with high accuracy. The achieved result is very good (accuracy of 99.27%) and is very promising compared to previous research where artificial neural networks were used. The Breast Cancer Wisconsin (Original) dataset was used as the benchmark test.
  •  
12.
  • Alickovic, Emina, et al. (author)
  • Performance evaluation of empirical mode decomposition, discrete wavelet transform, and wavelet packed decomposition for automated epileptic seizure detection and prediction
  • 2018
  • In: Biomedical Signal Processing and Control. - : ELSEVIER SCI LTD. - 1746-8094 .- 1746-8108. ; 39, s. 94-102
  • Journal article (peer-reviewed)abstract
    • This study proposes a new model which is fully specified for automated seizure onset detection and seizure onset prediction based on electroencephalography (EEG) measurements. We processed two archetypal EEG databases, Freiburg (intracranial EEG) and CHB-MIT (scalp EEG), to find out whether our model could outperform the state-of-the-art models. Four key components define our model: (1) multiscale principal component analysis for EEG de-noising, (2) EEG signal decomposition using either empirical mode decomposition, discrete wavelet transform, or wavelet packet decomposition, (3) statistical measures to extract relevant features, (4) machine learning algorithms. Our model achieved an overall accuracy of 100% in ictal vs. inter-ictal EEG for both databases. In seizure onset prediction, it could discriminate between inter-ictal, pre-ictal, and ictal EEG with an accuracy of 99.77%, and between inter-ictal and pre-ictal EEG states with an accuracy of 99.70%. The proposed model is general and should prove applicable to other classification tasks, including detection and prediction regarding bio-signals such as EMG and ECG.
  •  
13.
  • Alickovic, Emina, et al. (author)
  • Predicting EEG Responses to Attended Speech via Deep Neural Networks for Speech
  • 2023
  • In: 2023 45TH ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE &amp; BIOLOGY SOCIETY, EMBC. - : IEEE. - 9798350324471 - 9798350324488
  • Conference paper (peer-reviewed)abstract
    • Attending to the speech stream of interest in multi-talker environments can be a challenging task, particularly for listeners with hearing impairment. Research suggests that neural responses assessed with electroencephalography (EEG) are modulated by the listener's auditory attention, revealing selective neural tracking (NT) of the attended speech. NT methods mostly rely on hand-engineered acoustic and linguistic speech features to predict the neural response. Only recently, deep neural network (DNN) models without specific linguistic information have been used to extract speech features for NT, demonstrating that speech features in hierarchical DNN layers can predict neural responses throughout the auditory pathway. In this study, we go one step further to investigate the suitability of similar DNN models for speech to predict neural responses to competing speech observed in EEG. We recorded EEG data using a 64-channel acquisition system from 17 listeners with normal hearing instructed to attend to one of two competing talkers. Our data revealed that EEG responses are significantly better predicted by DNN-extracted speech features than by hand-engineered acoustic features. Furthermore, analysis of hierarchical DNN layers showed that early layers yielded the highest predictions. Moreover, we found a significant increase in auditory attention classification accuracies with the use of DNN-extracted speech features over the use of hand-engineered acoustic features. These findings open a new avenue for development of new NT measures to evaluate and further advance hearing technology.
  •  
14.
  • Baboukani, Payam Shahsavari, et al. (author)
  • EEG Phase Synchrony Reflects SNR Levels During Continuous Speech-in-Noise Tasks
  • 2021
  • In: 2021 43RD ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE &amp; BIOLOGY SOCIETY (EMBC). - : IEEE. - 9781728111797 ; , s. 531-534
  • Conference paper (peer-reviewed)abstract
    • Comprehension of speech in noise is a challenge for hearing-impaired (HI) individuals. Electroencephalography (EEG) provides a tool to investigate the effect of different levels of signal-to-noise ratio (SNR) of the speech. Most studies with EEG have focused on spectral power in well-defined frequency bands such as the alpha band. In this study, we investigate how local functional connectivity, i.e. functional connectivity within a localized region of the brain, is affected by two levels of SNR. Twenty-two HI participants performed a continuous speech-in-noise task at two different SNRs (+3 dB and +8 dB). The local connectivity within eight regions of interest was computed by using a multivariate phase synchrony measure on EEG data. The results showed that phase synchrony increased in the parietal and frontal areas in response to increasing SNR. We contend that local connectivity measures can be used to discriminate between speech-evoked EEG responses at different SNRs.
  •  
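Phase synchrony of the kind measured above can be illustrated with the bivariate phase-locking value (PLV). Note the paper uses a multivariate synchrony measure over regions of interest, so treat this numpy sketch (FFT-based Hilbert phase on synthetic alpha-band channels) as a simplified pairwise stand-in.

```python
import numpy as np

rng = np.random.default_rng(5)

def analytic_phase(x):
    # Instantaneous phase via an FFT-based Hilbert transform.
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.angle(np.fft.ifft(np.fft.fft(x) * h))

def plv(x, y):
    # Phase-locking value: modulus of the mean phase-difference phasor, in [0, 1].
    dphi = analytic_phase(x) - analytic_phase(y)
    return float(np.abs(np.mean(np.exp(1j * dphi))))

t = np.arange(1024) / 256.0                 # 4 s at 256 Hz
alpha = np.sin(2 * np.pi * 10 * t)          # shared 10-Hz "alpha" rhythm
chan_a = alpha + 0.2 * rng.standard_normal(t.size)
chan_b = alpha + 0.2 * rng.standard_normal(t.size)
chan_c = rng.standard_normal(t.size)        # unrelated channel
print("synchronised pair PLV:", round(plv(chan_a, chan_b), 2))
print("unrelated pair PLV:   ", round(plv(chan_a, chan_c), 2))
```

Averaging such pairwise values within a region approximates the "local connectivity" idea: channels sharing an oscillation keep a stable phase difference, so their mean phasor stays near unit length.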
15.
  • Bachmann, Florine L., et al. (author)
  • Extending Subcortical EEG Responses to Continuous Speech to the Sound-Field
  • 2024
  • In: TRENDS IN HEARING. - : SAGE PUBLICATIONS INC. - 2331-2165. ; 28
  • Journal article (peer-reviewed)abstract
    • The auditory brainstem response (ABR) is a valuable clinical tool for objective hearing assessment, which is conventionally detected by averaging neural responses to thousands of short stimuli. Progressing beyond these unnatural stimuli, brainstem responses to continuous speech presented via earphones have been recently detected using linear temporal response functions (TRFs). Here, we extend earlier studies by measuring subcortical responses to continuous speech presented in the sound-field, and assess the amount of data needed to estimate brainstem TRFs. Electroencephalography (EEG) was recorded from 24 normal hearing participants while they listened to clicks and stories presented via earphones and loudspeakers. Subcortical TRFs were computed after accounting for non-linear processing in the auditory periphery by either stimulus rectification or an auditory nerve model. Our results demonstrated that subcortical responses to continuous speech could be reliably measured in the sound-field. TRFs estimated using auditory nerve models outperformed simple rectification, and 16 minutes of data was sufficient for the TRFs of all participants to show clear wave V peaks for both earphones and sound-field stimuli. Subcortical TRFs to continuous speech were highly consistent in both earphone and sound-field conditions, and with click ABRs. However, sound-field TRFs required slightly more data (16 minutes) to achieve clear wave V peaks compared to earphone TRFs (12 minutes), possibly due to effects of room acoustics. By investigating subcortical responses to sound-field speech stimuli, this study lays the groundwork for bringing objective hearing assessment closer to real-life conditions, which may lead to improved hearing evaluations and smart hearing technologies.
  •  
16.
  • Fiedler, Lorenz, et al. (author)
  • Hearing Aid Noise Reduction Lowers the Sustained Listening Effort During Continuous Speech in Noise-A Combined Pupillometry and EEG Study
  • 2021
  • In: Ear and Hearing. - : LIPPINCOTT WILLIAMS & WILKINS. - 0196-0202 .- 1538-4667. ; 42:6, s. 1590-1601
  • Journal article (peer-reviewed)abstract
    • Objectives: The investigation of auditory cognitive processes recently moved from strictly controlled, trial-based paradigms toward the presentation of continuous speech. This also allows the investigation of listening effort on larger time scales (i.e., sustained listening effort). Here, we investigated the modulation of sustained listening effort by a noise reduction algorithm as applied in hearing aids in a listening scenario with noisy continuous speech. The investigated directional noise reduction algorithm mainly suppresses noise from the background. Design: We recorded the pupil size and the EEG in 22 participants with hearing loss who listened to audio news clips in the presence of background multi-talker babble noise. We estimated how noise reduction (off, on) and signal-to-noise ratio (SNR; +3 dB, +8 dB) affect pupil size and the power in the parietal EEG alpha band (i.e., parietal alpha power) as well as the behavioral performance. Results: Our results show that noise reduction reduces pupil size, while there was no significant effect of the SNR. It is important to note that we found interactions of SNR and noise reduction, which suggested that noise reduction reduces pupil size predominantly at the lower SNR. Parietal alpha power showed a similar yet nonsignificant pattern, with increased power under easier conditions. In line with the participants' reports that one of the two presented talkers was more intelligible, we found a reduced pupil size, increased parietal alpha power, and better performance when people listened to the more intelligible talker. Conclusions: We show that the modulation of sustained listening effort (e.g., by hearing aid noise reduction) as indicated by pupil size and parietal alpha power can be studied under more ecologically valid conditions. Mainly concluded from pupil size, we demonstrate that hearing aid noise reduction lowers sustained listening effort. 
Our study approximates real-world listening scenarios and evaluates the benefit of signal processing as found in a modern hearing aid.
  •  
17.
  • Geirnaert, Simon, et al. (author)
  • Reinforcement Learning in Reproducing Kernel Hilbert Spaces: Enabling Continuous Brain-Machine Interface Adaptation
  • 2021
  • In: IEEE signal processing magazine (Print). - : IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC. - 1053-5888 .- 1558-0792. ; 38:4, s. 89-102
  • Journal article (peer-reviewed)abstract
    • This tutorial reviews a series of reinforcement learning (RL) methods implemented in a reproducing kernel Hilbert space (RKHS) developed to address the challenges imposed on decoder design. RL-based decoders enable the user to learn the prosthesis control through interactions without desired signals and better represent the subject's goal to complete the task. The numerous actions in complex tasks and nonstationary neural states form a vast and dynamic state-action space, imposing a computational challenge on the decoder to detect the emerging neural patterns as well as quickly establish and adjust the globally optimal policy.
  •  
18.
  • Keding, Oskar, et al. (author)
  • Coherence Estimation Tracks Auditory Attention in Listeners with Hearing Impairment
  • 2023
  • In: INTERSPEECH 2023. ; , s. 5162-5166
  • Conference paper (peer-reviewed)abstract
    • Coherence estimation between the speech envelope and electroencephalography (EEG) is a proven method in neural speech tracking. This paper proposes an improved coherence estimation algorithm which utilises phase-sensitive multitaper cross-spectral estimation. Estimated EEG coherence differences between attended and ignored speech envelopes for a hearing-impaired (HI) population are evaluated and compared. Testing was performed on 31 HI subjects and showed significant coherence differences for grand averages over the delta, theta, and alpha EEG bands. The significance of increased coherence for attended speech was stronger for the new method than for the traditional method. The new method of estimating EEG coherence improves statistical detection performance and enables more rigorous data-based hypothesis testing.
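Magnitude-squared coherence between a speech envelope and EEG is the quantity being improved above. As a hedged baseline sketch, here is a plain Welch-style average (the paper's contribution is a phase-sensitive multitaper estimator, which is not shown); the envelope, lag, and noise level are synthetic.

```python
import numpy as np

def coherence(x, y, seg_len=256):
    # Welch-style magnitude-squared coherence averaged over Hann-windowed segments.
    win = np.hanning(seg_len)
    sxx = syy = sxy = 0
    for s in range(len(x) // seg_len):
        xs = np.fft.rfft(win * x[s * seg_len:(s + 1) * seg_len])
        ys = np.fft.rfft(win * y[s * seg_len:(s + 1) * seg_len])
        sxx = sxx + np.abs(xs) ** 2
        syy = syy + np.abs(ys) ** 2
        sxy = sxy + xs * np.conj(ys)
    return np.abs(sxy) ** 2 / (sxx * syy)

rng = np.random.default_rng(6)
n = 256 * 40
envelope = rng.standard_normal(n)
# Hypothetical EEG tracking the envelope: a delayed copy buried in noise.
eeg = np.roll(envelope, 10) + 2.0 * rng.standard_normal(n)
ignored = rng.standard_normal(n)
coh_att = coherence(envelope, eeg)
coh_ign = coherence(ignored, eeg)
print("mean coherence, attended:", round(float(coh_att.mean()), 2))
print("mean coherence, ignored: ", round(float(coh_ign.mean()), 2))
```

Averaging cross-spectra over segments is what keeps the estimate below 1 and above the bias floor for unrelated signals; the multitaper variant trades windowed segments for orthogonal tapers to lower the variance of exactly this estimate.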
  •  
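The attended-vs-ignored contrast described above can be sketched with ordinary Welch-based magnitude-squared coherence from SciPy. This deliberately does not reproduce the paper's phase-sensitive multitaper estimator; the sampling rate, mixing weights, and band limits are arbitrary assumptions for illustration.

```python
import numpy as np
from scipy.signal import coherence

fs = 128  # Hz, hypothetical EEG sampling rate
rng = np.random.default_rng(1)
t = np.arange(0, 60, 1 / fs)

# Hypothetical attended envelope, an EEG channel that partially tracks it,
# and an unrelated channel standing in for the ignored stream
envelope = rng.standard_normal(t.size)
eeg_attended = 0.8 * envelope + 0.5 * rng.standard_normal(t.size)
eeg_ignored = rng.standard_normal(t.size)

f, coh_att = coherence(envelope, eeg_attended, fs=fs, nperseg=256)
_, coh_ign = coherence(envelope, eeg_ignored, fs=fs, nperseg=256)

# Summarise coherence in the delta/theta range (1-8 Hz)
band = (f >= 1) & (f <= 8)
att_score, ign_score = coh_att[band].mean(), coh_ign[band].mean()
```

The attended stream yields markedly higher band-averaged coherence than the ignored one, which is the statistic the paper's multitaper method sharpens.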
19.
  • Kulasingham, Joshua, et al. (author)
  • Predictors for estimating subcortical EEG responses to continuous speech
  • 2024
  • In: PLOS ONE. - : PUBLIC LIBRARY SCIENCE. - 1932-6203. ; 19:2
  • Journal article (peer-reviewed)abstract
    • Perception of sounds and speech involves structures in the auditory brainstem that rapidly process ongoing auditory stimuli. The role of these structures in speech processing can be investigated by measuring their electrical activity using scalp-mounted electrodes. However, typical analysis methods involve averaging neural responses to many short repetitive stimuli that bear little relevance to daily listening environments. Recently, subcortical responses to more ecologically relevant continuous speech were detected using linear encoding models. These methods estimate the temporal response function (TRF), which is a regression model that minimises the error between the measured neural signal and a predictor derived from the stimulus. Using predictors that model the highly non-linear peripheral auditory system may improve linear TRF estimation accuracy and peak detection. Here, we compare predictors from both simple and complex peripheral auditory models for estimating brainstem TRFs on electroencephalography (EEG) data from 24 participants listening to continuous speech. We also investigate the data length required for estimating subcortical TRFs, and find that around 12 minutes of data is sufficient for clear wave V peaks (>3 dB SNR) to be seen in nearly all participants. Interestingly, predictors derived from simple filterbank-based models of the peripheral auditory system yield TRF wave V peak SNRs that are not significantly different from those estimated using a complex model of the auditory nerve, provided that the nonlinear effects of adaptation in the auditory system are appropriately modelled. Crucially, computing predictors from these simpler models is more than 50 times faster compared to the complex model. This work paves the way for efficient modelling and detection of subcortical processing of continuous speech, which may lead to improved diagnosis metrics for hearing impairment and assistive hearing technology.
  •  
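The temporal response function (TRF) described in the abstract above is, at its core, a regularised linear regression from lagged copies of a stimulus predictor to the neural signal. A minimal ridge-regression sketch on simulated data follows; the lag count, regularisation strength, and simulated signals are assumptions, and real subcortical pipelines use far higher sampling rates and the nonlinear peripheral-model predictors the paper compares.

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_lags = 5000, 8

# Simulated stimulus predictor (e.g. a rectified speech envelope) and a known TRF
stimulus = rng.standard_normal(n)
true_trf = np.array([0.0, 0.3, 1.0, 0.5, -0.4, -0.2, 0.1, 0.0])
eeg = np.convolve(stimulus, true_trf)[:n] + 0.5 * rng.standard_normal(n)

# Lagged design matrix: column l holds the stimulus delayed by l samples
X = np.zeros((n, n_lags))
for lag in range(n_lags):
    X[lag:, lag] = stimulus[:n - lag]

# Ridge-regularised least squares: w = (X'X + lam*I)^{-1} X'y
lam = 1.0
trf_est = np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ eeg)
```

With enough data the estimated filter recovers the true kernel; the paper's contribution lies in choosing predictors (simple filterbanks vs. auditory-nerve models) so that this linear fit exposes clear brainstem peaks.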
20.
  • Lunner, Thomas, et al. (author)
  • Three New Outcome Measures That Tap Into Cognitive Processes Required for Real-Life Communication
  • 2020
  • In: Ear and Hearing. - : Lippincott Williams & Wilkins. - 0196-0202 .- 1538-4667. ; 41, s. 39S-47S
  • Journal article (peer-reviewed)abstract
    • To increase the ecological validity of outcomes from laboratory evaluations of hearing and hearing devices, it is desirable to introduce more realistic outcome measures in the laboratory. This article presents and discusses three outcome measures that have been designed to go beyond traditional speech-in-noise measures to better reflect realistic everyday challenges. The outcome measures reviewed are: the Sentence-final Word Identification and Recall (SWIR) test that measures working memory performance while listening to speech in noise at ceiling performance; a neural tracking method that produces a quantitative measure of selective speech attention in noise; and pupillometry that measures changes in pupil dilation to assess listening effort while listening to speech in noise. According to evaluation data, the SWIR test provides a sensitive measure in situations where speech perception performance might be unaffected. Similarly, pupil dilation has also shown sensitivity in situations where traditional speech-in-noise measures are insensitive. Changes in working memory capacity and effort mobilization were found at positive signal-to-noise ratios (SNR), that is, at SNRs that might reflect everyday situations. Using stimulus reconstruction, it has been demonstrated that neural tracking is a robust method for determining to what degree a listener is attending to a specific talker in a typical cocktail party situation. Using both established and commercially available noise reduction schemes, data have further shown that all three measures are sensitive to variation in SNR. In summary, the new outcome measures seem suitable for testing hearing and hearing devices under more realistic and demanding everyday conditions than traditional speech-in-noise tests.
  •  
21.
  • Shahsavari Baboukani, Payam, et al. (author)
  • Estimating Conditional Transfer Entropy in Time Series Using Mutual Information and Nonlinear Prediction
  • 2020
  • In: Entropy. - : MDPI. - 1099-4300. ; 22:10
  • Journal article (peer-reviewed)abstract
    • We propose a new estimator to measure directed dependencies in time series. The dimensionality of the data is first reduced using a new non-uniform embedding technique, where the variables are ranked according to a weighted sum of the amount of new information and the improvement in prediction accuracy they provide. Then, using a greedy approach, the most informative subsets are selected iteratively. The algorithm terminates when the highest-ranked variable cannot significantly improve the prediction accuracy compared to that obtained using the already selected subsets. In a simulation study, we compare our estimator to existing state-of-the-art methods at different data lengths and directed-dependency strengths. The proposed estimator is demonstrated to have significantly higher accuracy than existing methods, especially in the difficult case where the data are highly correlated and coupled. Moreover, we show that its false detection of directed dependencies due to instantaneous coupling effects is lower than that of existing measures. We also show the applicability of the proposed estimator on real intracranial electroencephalography data.
  •  
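The quantity being estimated above, transfer entropy, reduces to a simple plug-in formula for binary series with one-sample histories: TE(X→Y) = Σ p(y₁, y₀, x₀) log₂[ p(y₁|y₀, x₀) / p(y₁|y₀) ]. The sketch below implements only this textbook histogram version, not the paper's non-uniform-embedding estimator, and the simulated coupling is an illustrative assumption.

```python
import numpy as np

def transfer_entropy_binary(x, y):
    """Plug-in transfer entropy TE(X -> Y) in bits for binary series,
    using one-sample histories: does x[t] help predict y[t+1] beyond y[t]?"""
    y1, y0, x0 = y[1:], y[:-1], x[:-1]
    te = 0.0
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                p_abc = np.mean((y1 == a) & (y0 == b) & (x0 == c))
                if p_abc == 0:
                    continue
                p_bc = np.mean((y0 == b) & (x0 == c))
                p_ab = np.mean((y1 == a) & (y0 == b))
                p_b = np.mean(y0 == b)
                # p(a|b,c) vs p(a|b): the gain from conditioning on the source
                te += p_abc * np.log2((p_abc / p_bc) / (p_ab / p_b))
    return te

rng = np.random.default_rng(3)
x = rng.integers(0, 2, 20000)
y = np.empty_like(x)
y[0] = 0
y[1:] = x[:-1]          # y copies x with a one-step delay: X drives Y

te_xy = transfer_entropy_binary(x, y)   # should approach 1 bit
te_yx = transfer_entropy_binary(y, x)   # should approach 0 bits
```

The asymmetry te_xy >> te_yx is exactly the "directed dependency" the paper's estimator targets in the much harder continuous, high-dimensional setting.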
22.
  • Shahsavari Baboukani, Payam, et al. (author)
  • Speech to noise ratio improvement induces nonlinear parietal phase synchrony in hearing aid users
  • 2022
  • In: Frontiers in Neuroscience. - : Frontiers Media SA. - 1662-4548 .- 1662-453X. ; 16
  • Journal article (peer-reviewed)abstract
    • Objectives: Comprehension of speech in adverse listening conditions is challenging for hearing-impaired (HI) individuals. Noise reduction (NR) schemes in hearing aids (HAs) have demonstrated the capability to help HI listeners overcome these challenges. The objective of this study was to investigate the effect of NR processing (inactive, where the NR feature was switched off, vs. active, where the NR feature was switched on) on correlates of listening effort across two different background noise levels [+3 dB signal-to-noise ratio (SNR) and +8 dB SNR], using a phase synchrony analysis of electroencephalogram (EEG) signals. Design: The EEG was recorded while 22 HI participants fitted with HAs performed a continuous speech-in-noise (SiN) task in the presence of background noise and a competing talker. The phase synchrony within eight regions of interest (ROIs) and four conventional EEG bands was computed using a multivariate phase synchrony measure. Results: The results demonstrated that the activation of NR in HAs affects the EEG phase synchrony in the parietal ROI at low SNR differently than at high SNR. The relationship between the conditions of the listening task and the phase synchrony in the parietal ROI was nonlinear. Conclusion: We showed that the activation of NR schemes in HAs can nonlinearly reduce correlates of listening effort as estimated by EEG-based phase synchrony. We contend that investigation of phase synchrony within ROIs can reflect the effects of HAs on HI individuals in ecological listening conditions.
  •  
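The paper computes a multivariate phase synchrony measure across ROIs; its simpler bivariate cousin, the phase-locking value (PLV), conveys the same intuition and can be sketched with the Hilbert transform. The sampling rate, oscillation frequency, and noise levels below are illustrative assumptions.

```python
import numpy as np
from scipy.signal import hilbert

def plv(sig_a, sig_b):
    """Phase-locking value: 1 = perfectly phase-locked, near 0 = random phases."""
    phase_a = np.angle(hilbert(sig_a))
    phase_b = np.angle(hilbert(sig_b))
    return np.abs(np.mean(np.exp(1j * (phase_a - phase_b))))

fs = 250
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(4)

# Two channels sharing a 10 Hz rhythm at a constant phase lag, vs. pure noise
locked_a = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(t.size)
locked_b = np.sin(2 * np.pi * 10 * t + 0.8) + 0.2 * rng.standard_normal(t.size)
noise = rng.standard_normal(t.size)

plv_locked = plv(locked_a, locked_b)
plv_noise = plv(locked_a, noise)
```

A constant phase lag still yields PLV near 1, which is why phase synchrony captures coordination between regions that raw amplitude correlation can miss.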
23.
  • Subasi, Abdulhamit, et al. (author)
  • Diagnosis of Chronic Kidney Disease by Using Random Forest
  • 2017
  • Conference paper (peer-reviewed)abstract
    • Chronic kidney disease (CKD) is a global public health problem, affecting approximately 10% of the population worldwide. Yet, there is little direct evidence on how CKD can be diagnosed in a systematic and automatic manner. This paper investigates how CKD can be diagnosed using machine learning (ML) techniques. ML algorithms have been a driving force in the detection of abnormalities in different physiological data, and are employed with great success in different classification tasks. In the present study, a number of different ML classifiers are experimentally validated on a real data set, taken from the UCI Machine Learning Repository, and our findings are compared with those reported in the recent literature. The results are quantitatively and qualitatively discussed, and our findings reveal that the random forest (RF) classifier achieves near-optimal performance in the identification of CKD subjects. Hence, we show that ML algorithms serve an important function in the diagnosis of CKD, with satisfactory robustness, and our findings suggest that RF can also be utilized for the diagnosis of similar diseases.
  •  
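The RF pipeline described above is a few lines in scikit-learn. The UCI CKD data set is not bundled here, so a synthetic stand-in from `make_classification` is used (24 features, echoing the CKD set's 24 clinical attributes); all hyperparameters are illustrative assumptions, not the paper's tuned values.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the UCI chronic kidney disease data
# (24 attributes, binary ckd / not-ckd label)
X, y = make_classification(n_samples=400, n_features=24, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
```

Held-out accuracy on separable synthetic data is high; on the real CKD set the paper reports the same pattern, with RF edging out the other classifiers compared.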
24.
  • Subasi, Abdulhamit, et al. (author)
  • Effect of photic stimulation for migraine detection using random forest and discrete wavelet transform
  • 2019
  • In: Biomedical Signal Processing and Control. - : ELSEVIER SCI LTD. - 1746-8094 .- 1746-8108. ; 49, s. 231-239
  • Journal article (peer-reviewed)abstract
    • Migraine is a neurological disorder characterized by persistent attacks, marked by sensitivity to light. One of the leading reasons migraine remains a difficult problem is that it cannot be diagnosed easily by physicians, because of the numerous symptoms that overlap with other diseases, such as epilepsy and tension headache. Consequently, studies have been growing on how to build a computerized decision support system for the diagnosis of migraine. In most laboratory studies, flash stimulation is used during the recording of electroencephalogram (EEG) signals, with different frequencies and variable time windows (in seconds). The main contribution of this study is the investigation of the effects of flash stimulation on the classification accuracy, and of how to find an effective window length for EEG signal classification. To achieve this, we tested different machine learning algorithms on EEG signal features extracted using the discrete wavelet transform. Our tests on a real-world dataset, recorded in the laboratory, show that flash stimulation can improve the classification accuracy by more than 10%. The same holds for the time window: selection of the proper window length is crucial for accurate migraine identification. (C) 2018 Elsevier Ltd. All rights reserved.
  •  
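The feature-extraction step above, a discrete wavelet transform followed by per-subband statistics, can be sketched with the orthonormal Haar wavelet written out by hand. The study's exact wavelet family, decomposition depth, and feature set are not specified in the abstract, so those choices here are assumptions.

```python
import numpy as np

def haar_dwt(signal):
    """One level of the orthonormal Haar DWT: approximation and detail halves."""
    s = np.asarray(signal, dtype=float)
    even, odd = s[0::2], s[1::2]
    approx = (even + odd) / np.sqrt(2)
    detail = (even - odd) / np.sqrt(2)
    return approx, detail

def wavelet_features(signal, levels=3):
    """Per-subband summary statistics (mean |coef|, std) as classifier inputs."""
    feats, current = [], np.asarray(signal, dtype=float)
    for _ in range(levels):
        current, detail = haar_dwt(current)
        feats += [np.mean(np.abs(detail)), np.std(detail)]
    feats += [np.mean(np.abs(current)), np.std(current)]
    return np.array(feats)

rng = np.random.default_rng(5)
eeg_epoch = rng.standard_normal(512)       # one hypothetical EEG window
approx, detail = haar_dwt(eeg_epoch)
feats = wavelet_features(eeg_epoch)        # 2 stats x (3 detail bands + 1 approx)
```

Because the Haar transform is orthonormal it preserves signal energy across subbands, which is why subband statistics make stable features regardless of window placement.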
25.
  • Tanveer, M Asjid, et al. (author)
  • Deep learning-based auditory attention decoding in listeners with hearing impairment
  • 2024
  • In: Journal of Neural Engineering. - : IOP Publishing Ltd. - 1741-2560 .- 1741-2552. ; 21:3
  • Journal article (peer-reviewed)abstract
    • This study develops a deep learning method for fast auditory attention decoding (AAD) using electroencephalography (EEG) from listeners with hearing impairment. It addresses three classification tasks: differentiating noise from speech-in-noise, classifying the direction of attended speech (left vs. right), and identifying the activation status of hearing aid noise reduction (NR) algorithms (OFF vs. ON). These tasks contribute to our understanding of how hearing technology influences auditory processing in the hearing-impaired population. Method: Deep convolutional neural network (DCNN) models were designed for each task. Two training strategies were employed to clarify the impact of data splitting on AAD tasks: inter-trial, where the testing set used classification windows from trials that the training set had not seen, and intra-trial, where the testing set used unseen classification windows from trials whose other segments were seen during training. The models were evaluated on EEG data from 31 participants with hearing impairment, listening to competing talkers amidst background noise. Results: Using 1-second classification windows, the DCNN models achieved accuracy (ACC) of 69.8%, 73.3% and 82.9% and area-under-curve (AUC) of 77.2%, 80.6% and 92.1% for the three tasks, respectively, with the inter-trial strategy. With the intra-trial strategy, they achieved ACC of 87.9%, 80.1% and 97.5%, along with AUC of 94.6%, 89.1% and 99.8%. Our DCNN models show good performance on short 1-second EEG samples, making them suitable for real-world applications. Conclusion: Our DCNN models successfully addressed three tasks with short 1-second EEG windows from participants with hearing impairment, showcasing their potential. While the inter-trial strategy demonstrated promise for assessing AAD, the intra-trial approach yielded inflated results, underscoring the important role of proper data splitting in EEG-based AAD tasks. Significance: Our findings showcase the promising potential of EEG-based tools for assessing auditory attention in clinical contexts and advancing hearing technology, while also promoting further exploration of alternative deep learning architectures and their potential constraints.
  •  
26.
  • Wilroth, Johanna, 1994-, et al. (author)
  • Direct Estimation of Linear Filters for EEG Source-Localization in a Competing-Talker Scenario
  • 2023
  • In: Special issue: 22nd IFAC World Congress. - : ELSEVIER. ; , s. 6510-6517
  • Conference paper (peer-reviewed)abstract
    • Hearing-impaired listeners have a reduced ability to selectively attend to sounds of interest amid distracting sounds in everyday environments. This ability is not fully regained with modern hearing technology. A better understanding of the brain mechanisms underlying selective attention during speech processing may lead to brain-controlled hearing aids with improved detection and amplification of the attended speech. Prior work has shown that brain responses to speech, measured with magnetoencephalography (MEG) or electroencephalography (EEG), are modulated by selective attention. These responses can be predicted from the speech signal through linear filters called Temporal Response Functions (TRFs). Unfortunately, these sensor-level predictions are often noisy and do not provide much insight into specific brain source locations. Therefore, a novel method called Neuro-Current Response Functions (NCRFs) was recently introduced to directly estimate linear filters at the brain source level from MEG responses to speech from one talker. However, MEG is not well-suited for wearable and real-time hearing technologies. This work aims to adapt the NCRF method for EEG in more realistic listening environments. EEG data was recorded from a hearing-impaired listener while attending to one of two competing talkers embedded in 16-talker babble noise. Preliminary results indicate that source-localized linear filters can be directly estimated from EEG data in such competing-talker scenarios. Future work will focus on evaluating the current method on a larger dataset and on developing novel methods, which may aid in the improvement of next-generation brain-controlled hearing technology.
  •  
27.
  • Wilroth, Johanna, 1994- (author)
  • Exploring Auditory Attention Using EEG
  • 2024
  • Licentiate thesis (other academic/artistic)abstract
    • Listeners with normal hearing often overlook their ability to comprehend speech in noisy environments effortlessly. Our brain's adeptness at identifying and amplifying attended voices while suppressing unwanted background noise, known as the cocktail party problem, has been extensively researched for decades. Yet many aspects of this complex puzzle remain unsolved, and listeners with hearing impairment still struggle to focus on a specific speaker in noisy environments. While recent intelligent hearing aids have improved noise suppression, the problem of deciding which speaker to enhance remains unsolved, leading to discomfort for many hearing aid users in noisy environments. In this thesis, we explore the complexities of the human brain in challenging auditory environments. Two datasets are investigated in which participants were tasked to selectively attend to one of two competing voices, replicating a cocktail-party scenario. The auditory stimuli trigger neurons to generate electrical signals that propagate in all directions. When a substantial number of neurons fire simultaneously, their collective electrical signal becomes detectable by small electrodes placed on the head. This method of measuring brain activity, known as electroencephalography (EEG), holds potential to provide feedback to hearing aids, enabling adjustments that enhance the attended voice(s). EEG data is often noisy, mixing neural responses with artifacts such as muscle movements, eye blinks and heartbeats. In the first contribution of this thesis, we focus on comparing different manual and automatic artifact-rejection techniques and assessing their impact on auditory attention decoding (AAD). While EEG measurements offer high temporal accuracy, their spatial resolution is inferior to that of alternative tools like magnetoencephalography (MEG). This difference poses a considerable challenge for source localization with EEG data. In the second contribution of this thesis, we demonstrate anticipated activity in the auditory cortex using EEG data from a single listener, employing Neuro-Current Response Functions (NCRFs). This method, previously evaluated only with MEG data, holds significant promise for hearing aid development. EEG data may involve both linear and nonlinear components due to the propagation of the electrical signals through brain tissue, skull, and scalp with varying conductivities. In the third contribution, we aim to enhance source localization by introducing a binning-based nonlinear detection and compensation method. The results suggest that compensating for some nonlinear components produces more precise and synchronized source localization compared to the original EEG data. In the fourth contribution, we present a novel domain adaptation framework that improves AAD performance for listeners with initially low classification accuracy. This framework focuses on classifying the direction (left or right) of attended speech and shows a significant accuracy improvement when transporting poor data from one listener to the domain of good data from different listeners. Taken together, the contributions of this thesis hold promise for improving the lives of hearing-impaired individuals by closing the loop between the brain and hearing aids.
  •  
28.
  • Wilroth, Johanna, et al. (author)
  • Improving EEG-based decoding of the locus of auditory attention through domain adaptation
  • 2023
  • In: Journal of Neural Engineering. - : Institute of Physics (IOP). - 1741-2560 .- 1741-2552. ; 20:6
  • Journal article (peer-reviewed)abstract
    • Objective. This paper presents a novel domain adaptation (DA) framework to enhance the accuracy of electroencephalography (EEG)-based auditory attention classification, specifically for classifying the direction (left or right) of attended speech. The framework aims to improve performance for subjects with initially low classification accuracy, overcoming challenges posed by instrumental and human factors. Limited dataset size, variations in EEG data quality due to factors such as noise, electrode misplacement or subject differences, and the need for generalization across different trials, conditions and subjects necessitate the use of DA methods. By leveraging DA methods, the framework can learn from one EEG dataset and adapt to another, potentially resulting in more reliable and robust classification models. Approach. This paper focuses on investigating a DA method, based on parallel transport, for addressing the auditory attention classification problem. The EEG data utilized in this study originate from an experiment where subjects were instructed to selectively attend to one of two spatially separated voices presented simultaneously. Main results. Significant improvement in classification accuracy was observed when poor data from one subject were transported to the domain of good data from different subjects, as compared to the baseline. The mean classification accuracy for subjects with poor data increased from 45.84% to 67.92%. Specifically, the highest classification accuracy achieved for one subject reached 83.33%, a substantial increase from the baseline accuracy of 43.33%. Significance. The findings of our study demonstrate the improved classification performance achieved through the implementation of DA methods. This brings us a step closer to leveraging EEG in neuro-steered hearing devices. © 2023 The Author(s). Published by IOP Publishing Ltd.
  •  
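The parallel-transport DA method of the paper above is not reproduced here; shown instead is the simpler covariance "recentering" alignment common in Riemannian EEG pipelines, which conveys the core idea of mapping subjects into a shared reference domain. The channel counts and simulated "subjects" are illustrative assumptions.

```python
import numpy as np

def inv_sqrtm(C):
    # Inverse matrix square root of a symmetric positive-definite matrix
    vals, vecs = np.linalg.eigh(C)
    return vecs @ np.diag(vals ** -0.5) @ vecs.T

def recenter(X):
    """Whiten a (channels x samples) EEG block by its own covariance, so data
    from different subjects share a common reference (identity covariance)."""
    C = X @ X.T / X.shape[1]
    return inv_sqrtm(C) @ X

rng = np.random.default_rng(6)
# Two hypothetical subjects with very different channel covariances
mix = rng.standard_normal((4, 4))
subject_1 = mix @ rng.standard_normal((4, 5000))
subject_2 = rng.standard_normal((4, 5000)) * np.array([[1.0], [3.0], [0.5], [2.0]])

Z1, Z2 = recenter(subject_1), recenter(subject_2)
C1 = Z1 @ Z1.T / Z1.shape[1]
C2 = Z2 @ Z2.T / Z2.shape[1]
```

After recentering, both subjects' covariances sit at the identity, so a classifier trained on one subject's aligned data has a chance of transferring to the other, the effect the paper quantifies with its transport-based method.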
Type of publication
journal article (18)
conference paper (9)
licentiate thesis (1)
Type of content
peer-reviewed (27)
other academic/artistic (1)
Author/Editor
Alickovic, Emina (26)
Graversen, Carina (9)
Lunner, Thomas (7)
Subasi, Abdulhamit (7)
Wendt, Dorothea (5)
Skoglund, Martin (4)
Ala, Tirdad Seifi (3)
Fiedler, Lorenz (3)
Innes-Brown, Hamish (3)
Eskelund, Kasper (3)
Ostergaard, Jan (3)
Bernhardsson, Bo (2)
Gustafsson, Fredrik (2)
Sandsten, Maria (2)
Kevric, Jasmin (2)
Bachmann, Florine L. (2)
Kulasingham, Joshua (2)
Enqvist, Martin (2)
Ljung, Lennart (1)
Keding, Oskar (1)
Cabrera, Alvaro Fuen ... (1)
Whitmer, William M. ... (1)
Hadley, Lauren V. V. (1)
Rank, Mike L. L. (1)
Whitmer, William M. (1)
Keidser, Gitte (1)
Mendoza, Carlos Fran ... (1)
Segar, Andrew (1)
Ng, Hoi Ning, Elaine ... (1)
Santurette, Sebastie ... (1)
Hietkamp, Renskje (1)
Ng, Hoi Ning Elaine (1)
Dorszewski, Tobias (1)
Christiansen, Thomas ... (1)
Gizzi, Leonardo (1)
Skoglund, Martin A. (1)
Baboukani, Payam Sha ... (1)
Heskebeck, Frida (1)
Schön, Thomas, Profe ... (1)
Geirnaert, Simon (1)
Vandecappelle, Serva ... (1)
de Cheveigne, Alain (1)
Lalor, Edmund (1)
Meyer, Bernd T. (1)
Miran, Sina (1)
Francart, Tom (1)
Bertrand, Alexander (1)
Enqvist, Martin, Ass ... (1)
Bergeling, Carolina, ... (1)
Elaine Ng, Hoi Ning (1)
University
Linköping University (27)
Lund University (4)
Blekinge Institute of Technology (1)
Language
English (27)
Swedish (1)
Research subject (UKÄ/SCB)
Engineering and Technology (14)
Medical and Health Sciences (9)
Social Sciences (4)
Natural sciences (3)
