SwePub
Search the SwePub database


Hit list for the search "WFRF:(Rakesh Sumit)"

Search: WFRF:(Rakesh Sumit)

  • Results 1-10 of 11
1.
2.
3.
4.
  • Kovács, György, Postdoctoral researcher, 1984-, et al. (author)
  • Pedagogical Principles in the Online Teaching of NLP: A Retrospection
  • 2021
  • In: Teaching NLP. Stroudsburg, PA, USA: Association for Computational Linguistics (ACL), pp. 1-12.
  • Conference paper (peer-reviewed). Abstract:
    • The ongoing COVID-19 pandemic has brought online education to the forefront of pedagogical discussions. To make this increased interest sustainable in a post-pandemic era, online courses must be built on strong pedagogical foundations. Pedagogic research has a long history, and many principles, frameworks, and models are available to help teachers build such foundations. These models cover different teaching perspectives, such as constructive alignment, feedback, and the learning environment. In this paper, we discuss how we designed and implemented our online Natural Language Processing (NLP) course following constructive alignment and adhering to the pedagogical principles of LTU. By examining our course and analyzing student evaluation forms, we show that we have met our goal and successfully delivered the course. Furthermore, we discuss the additional benefits of this mode of delivery, including the increased reusability of course content and the increased potential for collaboration between universities. Lastly, we discuss where we can and will further improve the current course design.
5.
6.
  • Rakesh, Sumit, 1987-, et al. (author)
  • Sign Gesture Recognition from Raw Skeleton Information in 3D Using Deep Learning
  • 2021
  • In: Computer Vision and Image Processing. Singapore: Springer Nature, pp. 184-195.
  • Conference paper (peer-reviewed). Abstract:
    • Sign Language Recognition (SLR) narrows the communication gap when interacting with hearing-impaired people, i.e., it connects hearing-impaired persons with those who need to communicate with them but do not understand sign language. This paper focuses on an end-to-end deep learning approach for recognizing sign gestures recorded with a 3D sensor (e.g., Microsoft Kinect). Typical machine-learning-based SLR systems require feature extraction before applying machine learning models, and these features must be chosen carefully because recognition performance relies heavily on them. Our proposed end-to-end approach removes this problem by eliminating the need for handcrafted features: deep learning models can work directly on raw data and learn higher-level representations (features) by themselves. To test this hypothesis, we used two recent and promising deep learning models, the Gated Recurrent Unit (GRU) and the Bidirectional Long Short-Term Memory (BiLSTM), and trained them on raw data only. We compared the two models against each other and against the results of the base paper. The experiments show that the proposed method outperforms the existing work, with the GRU achieving 70.78% average accuracy with front-view training.
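The end-to-end idea in entry 6, training a recurrent model directly on raw skeleton coordinates instead of handcrafted features, lends itself to a compact illustration. Below is a minimal, hypothetical PyTorch sketch of a GRU classifier over raw 3D joint sequences; the joint count, sequence length, hidden size, and class count are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class SkeletonGRU(nn.Module):
    """GRU classifier over raw 3D joint coordinates, no handcrafted features."""
    def __init__(self, n_joints=25, n_classes=35, hidden=128):  # assumed sizes
        super().__init__()
        # Each frame is the flattened (x, y, z) of every tracked joint.
        self.gru = nn.GRU(input_size=n_joints * 3, hidden_size=hidden,
                          num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):             # x: (batch, frames, n_joints * 3)
        _, h = self.gru(x)            # h: (num_layers, batch, hidden)
        return self.head(h[-1])       # logits from the last layer's final state

model = SkeletonGRU()
clips = torch.randn(4, 60, 25 * 3)   # 4 clips, 60 frames, 25 Kinect-style joints
print(model(clips).shape)            # torch.Size([4, 35])
```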
7.
  • Rakesh, Sumit, 1987-, et al. (author)
  • Static Palm Sign Gesture Recognition with Leap Motion and Genetic Algorithm
  • 2021
  • In: 2021 Swedish Artificial Intelligence Society Workshop (SAIS). IEEE, pp. 54-58.
  • Conference paper (peer-reviewed). Abstract:
    • Sign gesture recognition models sign gestures to facilitate communication with hearing- and speech-impaired people. Sign gestures are typically recorded with devices such as video or depth cameras; palm gestures can also be recorded with the Leap Motion sensor. In this paper, we address palm sign gesture recognition using the Leap Motion sensor. We extract geometric features from Leap Motion recordings and apply a Genetic Algorithm (GA) for feature selection. The genetically selected features are then fed to different classifiers for gesture recognition: Support Vector Machine (SVM), Random Forest (RF), and Naive Bayes (NB), whose results we compare. A gesture recognition accuracy of 74.00% is achieved with the RF classifier on the Leap Motion sign gesture dataset.
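Entry 7's pipeline (geometric features, GA-based feature selection, then a classifier) can be sketched briefly. The following is a simplified, hypothetical scikit-learn illustration: synthetic data stands in for the Leap Motion features, the GA uses selection and bit-flip mutation only (no crossover), and the population size and generation count are arbitrary assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=40, n_informative=10,
                           random_state=0)  # stand-in for geometric features

def fitness(mask):
    # Score a feature subset by cross-validated Random Forest accuracy.
    if not mask.any():
        return 0.0
    clf = RandomForestClassifier(n_estimators=25, random_state=0)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

pop = rng.random((20, X.shape[1])) < 0.5           # 20 random boolean feature masks
for _ in range(10):                                # 10 generations (assumed)
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]        # keep the 10 fittest masks
    children = parents[rng.integers(0, 10, size=10)].copy()
    children ^= rng.random(children.shape) < 0.05  # bit-flip mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print(f"selected {int(best.sum())} of {X.shape[1]} features")
```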
8.
  • Saini, Rajkumar, Dr., 1988-, et al. (author)
  • Imagined Object Recognition Using EEG-Based Neurological Brain Signals
  • 2022
  • In: Recent Trends in Image Processing and Pattern Recognition (RTIP2R 2021). Cham: Springer, pp. 305-319.
  • Conference paper (peer-reviewed). Abstract:
    • Researchers have been using Electroencephalography (EEG) to build Brain-Computer Interface (BCI) systems, with considerable success in modeling brain signals for applications including emotion detection, user identification, authentication, and control. The goal of this study is to employ EEG-based neurological brain signals to recognize imagined objects. The user views an object on a monitor screen and then imagines it; the EEG signal is recorded while the user thinks about the object. These EEG signals were processed using signal processing methods, and machine learning algorithms were trained to classify them. The study involves coarse- and fine-level EEG signal classification: the coarse level categorizes the signals into three classes (Char, Digit, Object), whereas the fine level categorizes them into 30 classes. Recognition rates of 97.30% and 93.64% were recorded for coarse- and fine-level classification, respectively. Experiments indicate that the proposed work outperforms previous methods.
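The two-level scheme in entry 8 (a 3-way coarse split into Char/Digit/Object and a 30-way fine split over the same trials) can be mimicked mechanically. Below is a hypothetical scikit-learn sketch on synthetic feature vectors; with random data both accuracies sit near chance, so only the labeling structure, not the reported performance, is illustrated.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(900, 64))     # synthetic stand-in for EEG feature vectors
fine = rng.integers(0, 30, 900)    # 30 fine classes
coarse = fine // 10                # 3 coarse classes: 0=Char, 1=Digit, 2=Object

# Train one classifier per label granularity on the same features.
for name, labels in [("coarse (3-way)", coarse), ("fine (30-way)", fine)]:
    Xtr, Xte, ytr, yte = train_test_split(X, labels, random_state=0)
    acc = SVC().fit(Xtr, ytr).score(Xte, yte)
    print(f"{name}: accuracy {acc:.2f}")
```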
9.
  • Simistira Liwicki, Foteini, et al. (author)
  • Bimodal electroencephalography-functional magnetic resonance imaging dataset for inner-speech recognition
  • 2023
  • In: Scientific Data. Springer Nature. ISSN 2052-4463, vol. 10.
  • Journal article (peer-reviewed). Abstract:
    • The recognition of inner speech, which could give a 'voice' to patients who are unable to speak or move, is a challenge for brain-computer interfaces (BCIs). A shortcoming of the available datasets is that they do not combine modalities to increase the performance of inner speech recognition. Multimodal datasets of brain data enable the fusion of neuroimaging modalities with complementary properties, such as the high spatial resolution of functional magnetic resonance imaging (fMRI) and the high temporal resolution of electroencephalography (EEG), and are therefore promising for decoding inner speech. This paper presents the first publicly available bimodal dataset containing EEG and fMRI data acquired non-simultaneously during inner-speech production. Data were obtained from four healthy, right-handed participants during an inner-speech task with words in either a social or a numerical category. Each of the eight word stimuli was assessed with 40 trials, resulting in 320 trials in each modality for each participant. The aim of this work is to provide a publicly available bimodal dataset on inner speech, contributing towards speech prostheses.
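The trial counts stated in entries 9 and 10 follow directly from the design; a two-line check using only numbers taken from the abstracts:

```python
# Trial bookkeeping for the dataset in entry 9 (numbers from the abstracts).
participants, stimuli, trials_per_stimulus = 4, 8, 40
per_participant = stimuli * trials_per_stimulus   # 320 trials per modality
total = participants * per_participant            # 1280 trials per modality
print(per_participant, total)                     # -> 320 1280
```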
10.
  • Simistira Liwicki, Foteini, et al. (author)
  • Bimodal pilot study on inner speech decoding reveals the potential of combining EEG and fMRI
  • 2024
  • Other publication (other academic/artistic). Abstract:
    • This paper presents the first publicly available bimodal electroencephalography (EEG) / functional magnetic resonance imaging (fMRI) dataset and an open-source benchmark for inner speech decoding. Decoding inner speech or thought (speech expressed internally, without actual speaking) is a challenge, with typical results close to chance level. The dataset comprises 1280 trials (4 subjects, 8 stimuli = 2 categories * 4 words, and 40 trials per stimulus) in each modality. For binary classification, the pilot study reports a mean accuracy of 71.72% when combining the two modalities (EEG and fMRI), compared to 62.81% and 56.17% when using EEG or fMRI alone, respectively. A similar improvement is observed for word classification (8 classes): 30.29% with the combination versus 22.19% and 17.50% without. As such, this paper demonstrates that combining EEG with fMRI is a promising direction for inner speech decoding.
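Entry 10 reports that combining EEG and fMRI beats either modality alone. One common way to combine per-modality classifiers is late fusion: train a classifier on each modality separately and average the predicted class probabilities. The sketch below is a hypothetical illustration of that scheme on synthetic data; the paper does not specify that this exact fusion method was used.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 320                                    # trials per modality, as in entry 9
y = rng.integers(0, 2, n)                  # binary task (social vs. numerical)
X_eeg = rng.normal(size=(n, 32)) + 0.3 * y[:, None]    # weakly informative EEG
X_fmri = rng.normal(size=(n, 200)) + 0.2 * y[:, None]  # weakly informative fMRI

idx_tr, idx_te = train_test_split(np.arange(n), random_state=0)
p_eeg = (LogisticRegression(max_iter=1000)
         .fit(X_eeg[idx_tr], y[idx_tr]).predict_proba(X_eeg[idx_te]))
p_fmri = (LogisticRegression(max_iter=1000)
          .fit(X_fmri[idx_tr], y[idx_tr]).predict_proba(X_fmri[idx_te]))

fused = (p_eeg + p_fmri) / 2               # average the per-class posteriors
for name, p in [("EEG", p_eeg), ("fMRI", p_fmri), ("fused", fused)]:
    acc = (p.argmax(axis=1) == y[idx_te]).mean()
    print(f"{name}: accuracy {acc:.2f}")
```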
Publication type
conference paper (6)
journal article (3)
other publication (2)
Content type
peer-reviewed (8)
other academic/artistic (3)
Author/editor
Liwicki, Marcus (6)
Mokayed, Hamam (5)
Liwicki, Foteini (3)
Kumar, Rakesh (2)
Abid, Nosheen, 1993- (2)
Kovács, György, Post ... (2)
Zhang, Yan (1)
Korhonen, Laura (1)
Lindholm, Dan (1)
Vertessy, Beata G. (1)
Wang, Mei (1)
Wang, Xin (1)
Liu, Yang (1)
Wang, Dong (1)
Li, Ke (1)
Liu, Ke (1)
Zhang, Yang (1)
Nàgy, Péter (1)
Kominami, Eiki (1)
van der Goot, F. Gis ... (1)
Bonaldo, Paolo (1)
Thum, Thomas (1)
Adams, Christopher M (1)
Minucci, Saverio (1)
Vellenga, Edo (1)
Adewumi, Tosin, 1978 ... (1)
Swärd, Karl (1)
Nilsson, Per (1)
De Milito, Angelo (1)
Zhang, Jian (1)
Shukla, Deepak (1)
Kågedal, Katarina (1)
Chen, Guoqiang (1)
Liu, Wei (1)
Cheetham, Michael E. (1)
Sigurdson, Christina ... (1)
Clarke, Robert (1)
Zhang, Fan (1)
Gonzalez-Alegre, Ped ... (1)
Jin, Lei (1)
Chen, Qi (1)
Taylor, Mark J. (1)
Romani, Luigina (1)
Wang, Ying (1)
Kumar, Ashok (1)
Simons, Matias (1)
Ishaq, Mohammad (1)
Yang, Qian (1)
Algül, Hana (1)
Brest, Patrick (1)
Higher education institution
Luleå tekniska universitet (9)
Umeå universitet (2)
Lunds universitet (2)
Stockholms universitet (1)
Örebro universitet (1)
Linköpings universitet (1)
Karolinska Institutet (1)
Sveriges Lantbruksuniversitet (1)
Language
English (11)
Research subject (UKÄ/SCB)
Natural sciences (10)
Engineering and technology (2)
Medical and health sciences (2)
Social sciences (1)

