SwePub
Search the SwePub database


Search results for the query "WFRF:(Gupta Vibha)"

  • Results 1-6 of 6
1.
  • Liwicki, Foteini, et al. (authors)
  • Rethinking the Methods and Algorithms for Inner Speech Decoding and Making Them Reproducible
  • 2022
  • In: NeuroSci. - MDPI. - ISSN 2673-4087 ; 3:2, pp. 226-244
  • Journal article (peer-reviewed), abstract:
    • This study focuses on the automatic decoding of inner speech using noninvasive methods, such as Electroencephalography (EEG). While inner speech has been a research topic in philosophy and psychology for half a century, recent attempts have been made to decode nonvoiced spoken words by using various brain–computer interfaces. The main shortcomings of existing work are reproducibility and the availability of data and code. In this work, we investigate various methods (Convolutional Neural Network (CNN), Gated Recurrent Unit (GRU), and Long Short-Term Memory (LSTM) networks) for the detection task of five vowels and six words on a publicly available EEG dataset. The main contributions of this work are (1) subject-dependent vs. subject-independent approaches, (2) the effect of different preprocessing steps (Independent Component Analysis (ICA), down-sampling, and filtering), and (3) word classification (where we achieve state-of-the-art performance on a publicly available dataset). Overall, we achieve accuracies of 35.20% and 29.21% when classifying five vowels and six words, respectively, using our tuned iSpeech-CNN architecture. All of our code and processed data are publicly available to ensure reproducibility. As such, this work contributes to a deeper understanding and reproducibility of experiments in the area of inner speech detection.
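Two of the preprocessing steps the abstract names, down-sampling and filtering, can be sketched in a few lines. This is a toy illustration under simple assumptions (a crude moving-average low-pass filter and decimation), not the authors' released iSpeech-CNN code; all names are hypothetical:

```python
# Toy illustration of EEG preprocessing: low-pass filtering and
# down-sampling of one channel. Not the authors' released code.

def moving_average(signal, window=4):
    """Crude low-pass filter: mean over a sliding window."""
    return [sum(signal[i:i + window]) / window
            for i in range(len(signal) - window + 1)]

def downsample(signal, factor=2):
    """Keep every `factor`-th sample to reduce the sampling rate."""
    return signal[::factor]

raw = [0.0, 1.0, 0.0, -1.0] * 4      # 16-sample toy EEG trace
smoothed = moving_average(raw)       # 13 samples, high frequencies damped
reduced = downsample(smoothed)       # 7 samples at half the rate
```

In practice, EEG pipelines like the one described would use dedicated tooling (proper FIR/IIR filters and ICA) rather than a moving average, but the shape of the data flow is the same: filter first, then decimate.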
2.
3.
4.
  • Simistira Liwicki, Foteini, et al. (authors)
  • Bimodal electroencephalography-functional magnetic resonance imaging dataset for inner-speech recognition
  • 2023
  • In: Scientific Data. - Springer Nature. - ISSN 2052-4463 ; 10
  • Journal article (peer-reviewed), abstract:
    • The recognition of inner speech, which could give a ‘voice’ to patients who have no ability to speak or move, is a challenge for brain-computer interfaces (BCIs). A shortcoming of the available datasets is that they do not combine modalities to increase the performance of inner speech recognition. Multimodal datasets of brain data enable the fusion of neuroimaging modalities with complementary properties, such as the high spatial resolution of functional magnetic resonance imaging (fMRI) and the high temporal resolution of electroencephalography (EEG), and are therefore promising for decoding inner speech. This paper presents the first publicly available bimodal dataset containing EEG and fMRI data acquired nonsimultaneously during inner-speech production. Data were obtained from four healthy, right-handed participants during an inner-speech task with words in either a social or numerical category. Each of the eight word stimuli was assessed in 40 trials, resulting in 320 trials in each modality for each participant. The aim of this work is to provide a publicly available bimodal dataset on inner speech, contributing towards speech prostheses.
5.
  • Simistira Liwicki, Foteini, et al. (authors)
  • Bimodal pilot study on inner speech decoding reveals the potential of combining EEG and fMRI
  • 2024
  • Other publication (other academic/artistic), abstract:
    • This paper presents the first publicly available bimodal electroencephalography (EEG) / functional magnetic resonance imaging (fMRI) dataset and an open-source benchmark for inner speech decoding. Decoding inner speech, or thought expressed through an inner voice without actual speaking, is a challenge, with typical results close to chance level. The dataset comprises 1280 trials (4 subjects, 8 stimuli = 2 categories * 4 words, and 40 trials per stimulus) in each modality. The pilot study reports, for binary classification, a mean accuracy of 71.72% when combining the two modalities (EEG and fMRI), compared to 62.81% and 56.17% when using EEG or fMRI alone, respectively. The same improvement in performance is observed for word classification (8 classes): 30.29% with the combination vs. 22.19% and 17.50% without. As such, this paper demonstrates that combining EEG with fMRI is a promising direction for inner speech decoding.
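The gain from combining modalities reported above is the classic late-fusion pattern: each modality yields class probabilities, and the fused prediction averages them. A minimal sketch, assuming equal weighting of the two classifiers (the paper's actual fusion scheme and any names here are not taken from the source):

```python
# Hypothetical late-fusion sketch: average per-class probabilities
# from an EEG classifier and an fMRI classifier. Illustrative only.

def late_fusion(p_eeg, p_fmri, w_eeg=0.5):
    """Weighted average of per-class probabilities from two modalities."""
    return [w_eeg * a + (1.0 - w_eeg) * b for a, b in zip(p_eeg, p_fmri)]

def predict(probs):
    """Index of the most probable class."""
    return max(range(len(probs)), key=probs.__getitem__)

# Toy 2-class example where the two modalities disagree:
p_eeg = [0.55, 0.45]
p_fmri = [0.30, 0.70]
fused = late_fusion(p_eeg, p_fmri)   # [0.425, 0.575]
```

The point of the example: a weakly confident EEG prediction can be overturned by a more confident fMRI prediction, which is one mechanism by which the combined accuracy can exceed either modality alone.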
6.
  • Sultanian, Pedram, et al. (authors)
  • Prediction of survival in out-of-hospital cardiac arrest: the updated Swedish cardiac arrest risk score (SCARS) model
  • 2024
  • In: European Heart Journal - Digital Health. - ISSN 2634-3916.
  • Journal article (peer-reviewed), abstract:
    • Aims: Out-of-hospital cardiac arrest (OHCA) is a major health concern worldwide. Although one-third of all patients achieve a return of spontaneous circulation and may undergo a difficult period in the intensive care unit, only 1 in 10 survive. This study aims to improve our previously developed machine learning model for early prognostication of survival in OHCA. Methods and results: We studied all cases registered in the Swedish Cardiopulmonary Resuscitation Registry between 2010 and 2020 (n = 55 615). We compared the predictive performance of extreme gradient boosting (XGB), light gradient boosting machine (LightGBM), logistic regression, CatBoost, random forest, and TabNet. For each framework, we developed models that optimized (i) a weighted F1 score, to penalize models that yielded more false negatives, and (ii) the area under the precision-recall curve (PR AUC). LightGBM assigned higher importance values to a larger set of variables, while XGB made predictions using fewer predictors. The area under the receiver operating characteristic curve (AUC ROC) scores for LightGBM were 0.958 (optimized for weighted F1) and 0.961 (optimized for PR AUC), while for XGB the scores were 0.958 and 0.960, respectively. The calibration plots showed a subtle underestimation of survival for LightGBM, contrasting with a mild overestimation for the XGB models. In the crucial range of 0-10% likelihood of survival, the XGB model optimized for PR AUC emerged as a clinically safe model. Conclusion: We improved our previous prediction model by creating a parsimonious model with an AUC ROC of 0.96, with excellent calibration and no apparent risk of underestimating survival in the critical probability range (0-10%). The model is available at www.gocares.se.
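The "weighted F1 score to penalize false negatives" mentioned in the abstract corresponds to the F-beta family of metrics, where beta > 1 weights recall (i.e., missed positives) more heavily than precision. A minimal sketch of that metric, computed from raw counts; this is an illustration of the general formula, not the study's training code, and the example counts are invented:

```python
# F-beta score from confusion-matrix counts. With beta > 1, recall
# (avoiding false negatives) dominates the score. Illustrative only.

def f_beta(tp, fp, fn, beta=1.0):
    """F-beta score from true positives, false positives, false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta * beta
    return (1.0 + b2) * precision * recall / (b2 * precision + recall)

# Hypothetical counts: 8 true positives, 2 false positives, 4 false negatives.
f1 = f_beta(8, 2, 4)            # plain F1 (beta = 1)
f2 = f_beta(8, 2, 4, beta=2.0)  # recall-weighted; lower here, since recall is weak
```

Optimizing a recall-weighted score like this during model selection pushes the classifier toward fewer false negatives, which matters clinically when a false negative means underestimating a patient's chance of survival.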