SwePub

Result list for search "WFRF:(De Kanjar)"

Search: WFRF:(De Kanjar)

  • Results 1-10 of 12
1.
2.
3.
  • De, Kanjar, et al. (author)
  • Effect of hue shift towards robustness of convolutional neural networks
  • 2022
  • Conference paper (peer-reviewed), abstract:
    • Computer vision systems are deployed in diverse real-time systems, hence robustness is a major area of concern. The vast majority of AI-enabled systems are based on convolutional neural network models that use 3-channel RGB images as input. It has been shown that the performance of AI systems, such as those used in classification, is impacted by distortions in the images. To date, most work has been carried out on distortions such as noise, blur, and compression. However, colour-related changes to images could also impact performance. Therefore, the goal of this paper is to study the robustness of these models under different hue shifts.
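The hue-shift distortion described in the abstract above can be illustrated with a short sketch. This is not code from the paper; Pillow/NumPy and the helper name hue_shift are assumptions, and the shift simply rotates the hue channel of an RGB image while leaving everything else unchanged.

```python
# Minimal sketch (not the paper's pipeline): rotate the hue of an RGB image.
# Assumes Pillow and NumPy; the helper name and angle values are illustrative.
import numpy as np
from PIL import Image

def hue_shift(img: Image.Image, degrees: float) -> Image.Image:
    """Return a copy of `img` with its hue rotated by `degrees` (0-360)."""
    hsv = np.array(img.convert("HSV"), dtype=np.uint8)
    # PIL stores hue in [0, 255]; map the angle onto that range and wrap around.
    offset = int(round(degrees / 360.0 * 256.0)) % 256
    hsv[..., 0] = ((hsv[..., 0].astype(np.int16) + offset) % 256).astype(np.uint8)
    return Image.fromarray(hsv, mode="HSV").convert("RGB")

# Usage: generate hue-shifted variants of one test image before feeding them
# to the classifier under study.
# img = Image.open("example.jpg").convert("RGB")
# variants = [hue_shift(img, d) for d in (30, 60, 90, 120, 150, 180)]
```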
4.
  • De, Kanjar (author)
  • Exploring Effects of Colour and Image Quality in Semantic Segmentation by Deep Learning Methods
  • 2022
  • In: Journal of Imaging Science and Technology. - The Society for Imaging Science and Technology. - ISSN 1062-3701, E-ISSN 1943-3522 ; 66:5, pp. 050401-1 to 050401-10
  • Journal article (peer-reviewed), abstract:
    • Recent advances in convolutional neural networks and vision transformers have brought about a revolution in the area of computer vision. Studies have shown that the performance of deep learning-based models is sensitive to image quality. The human visual system is trained to infer semantic information from poor-quality images, but deep learning algorithms may find it challenging to perform this task. In this paper, we study the effect of image quality and colour parameters on deep learning models trained for the task of semantic segmentation. One of the major challenges in benchmarking robust deep learning-based computer vision models is the lack of challenging data covering different quality and colour parameters. In this paper, we have generated data using a subset of the standard benchmark semantic segmentation dataset (ADE20K) with the goal of studying the effect of different quality and colour parameters on the semantic segmentation task. To the best of our knowledge, this is one of the first attempts to benchmark semantic segmentation algorithms under different colour and quality parameters, and this study will motivate further research in this direction.
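A hedged sketch of the kind of data generation the abstract above describes: producing colour- and quality-degraded variants of a single ADE20K image. This is not the authors' generation code; the parameter values and file names are illustrative assumptions, and the segmentation masks are left untouched so the same ground truth can be reused for scoring each variant.

```python
# Minimal sketch (assumptions, not the paper's pipeline): build a small set of
# colour/quality variants of one ADE20K image for a segmentation benchmark.
from io import BytesIO
from PIL import Image, ImageEnhance, ImageFilter

def make_variants(path: str) -> dict:
    """Return the original image plus a few colour- and quality-degraded copies."""
    img = Image.open(path).convert("RGB")
    variants = {"original": img}
    # Colour parameters: lowered saturation and brightness.
    variants["desaturated"] = ImageEnhance.Color(img).enhance(0.3)
    variants["darkened"] = ImageEnhance.Brightness(img).enhance(0.5)
    # Quality parameters: Gaussian blur and heavy JPEG compression.
    variants["blurred"] = img.filter(ImageFilter.GaussianBlur(radius=3))
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=10)
    buf.seek(0)
    variants["jpeg_q10"] = Image.open(buf).convert("RGB")
    return variants

# for name, im in make_variants("ADE_val_00000001.jpg").items():
#     im.save(f"variant_{name}.png")
```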
5.
  • De, Kanjar, et al. (author)
  • Impact of Colour on Robustness of Deep Neural Networks
  • 2021
  • In: 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW). - IEEE ; pp. 21-30
  • Conference paper (peer-reviewed), abstract:
    • Convolutional neural networks have become the most widely used tool for computer vision applications like image classification, segmentation, object localization, etc. Recent studies have shown that the quality of images has a significant impact on the performance of these deep neural networks. The accuracy of computer vision tasks is significantly influenced by image quality, because distorted images shift away from the distribution on which the networks were trained. Although the effects of perturbations like image noise, image blur, image contrast, and compression artifacts on the performance of deep neural networks for image classification have been studied, the effects of colour and the quality of colour in digital images have been a mostly unexplored direction. One of the biggest challenges is that there is no particular dataset dedicated to colour distortions and colour aspects of images in image classification. The main aim of this paper is to study the impact of colour distortions on the performance of image classification using deep neural networks. Experiments performed using multiple state-of-the-art deep convolutional neural architectures on a proposed colour-distorted dataset are presented, and the impact of colour on the image classification task is demonstrated.
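A sketch of the evaluation pattern the abstract above implies: score a pretrained classifier while a colour parameter is varied and watch how accuracy degrades. This is not the paper's benchmark or dataset; it assumes torch/torchvision, an image folder whose class ordering matches the model's label indices, and illustrative saturation values.

```python
# Minimal sketch (not the paper's benchmark): top-1 accuracy of a pretrained
# ImageNet classifier as colour saturation is varied. Paths are illustrative.
import torch
import torchvision.transforms as T
import torchvision.transforms.functional as TF
from torchvision import datasets, models

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2).eval().to(device)

def top1_accuracy(root: str, saturation: float) -> float:
    """Top-1 accuracy on an ImageFolder dataset with a fixed saturation factor."""
    tf = T.Compose([
        T.Resize(256), T.CenterCrop(224),
        T.Lambda(lambda im: TF.adjust_saturation(im, saturation)),
        T.ToTensor(),
        T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])
    loader = torch.utils.data.DataLoader(datasets.ImageFolder(root, tf), batch_size=32)
    correct = total = 0
    with torch.no_grad():
        for x, y in loader:
            pred = model(x.to(device)).argmax(dim=1).cpu()
            correct += (pred == y).sum().item()
            total += y.numel()
    return correct / total

# 1.0 keeps the original colours, 0.0 is fully greyscale.
# for s in (1.0, 0.5, 0.25, 0.0):
#     print(f"saturation={s}: top-1={top1_accuracy('val_images/', s):.3f}")
```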
6.
7.
  • Liwicki, Foteini, et al. (author)
  • Rethinking the Methods and Algorithms for Inner Speech Decoding and Making Them Reproducible
  • 2022
  • In: NeuroSci. - MDPI. - ISSN 2673-4087 ; 3:2, pp. 226-244
  • Journal article (peer-reviewed), abstract:
    • This study focuses on the automatic decoding of inner speech using noninvasive methods, such as Electroencephalography (EEG). While inner speech has been a research topic in philosophy and psychology for half a century, recent attempts have been made to decode nonvoiced spoken words by using various brain–computer interfaces. The main shortcomings of existing work are reproducibility and the availability of data and code. In this work, we investigate various methods (using Convolutional Neural Network (CNN), Gated Recurrent Unit (GRU), Long Short-Term Memory Networks (LSTM)) for the detection task of five vowels and six words on a publicly available EEG dataset. The main contributions of this work are (1) subject-dependent vs. subject-independent approaches, (2) the effect of different preprocessing steps (Independent Component Analysis (ICA), down-sampling and filtering), and (3) word classification (where we achieve state-of-the-art performance on a publicly available dataset). Overall, we achieve a performance accuracy of 35.20% and 29.21% when classifying five vowels and six words, respectively, on a publicly available dataset, using our tuned iSpeech-CNN architecture. All of our code and processed data are publicly available to ensure reproducibility. As such, this work contributes to a deeper understanding and reproducibility of experiments in the area of inner speech detection.
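For the CNN baseline mentioned in the abstract above, a minimal sketch of a convolutional classifier over windowed EEG trials is shown below. This is not the authors' iSpeech-CNN; the electrode count, window length, and layer sizes are illustrative assumptions.

```python
# Minimal sketch of a CNN over EEG trials (batch, channels, time); not the
# authors' architecture. Shapes below assume 64 electrodes and 512 samples.
import torch
import torch.nn as nn

class EEGConvNet(nn.Module):
    def __init__(self, n_channels: int = 64, n_samples: int = 512, n_classes: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
            nn.BatchNorm1d(32), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.BatchNorm1d(64), nn.ReLU(), nn.MaxPool1d(4),
        )
        # Two pooling stages shrink the time axis by a factor of 16.
        self.classifier = nn.Linear(64 * (n_samples // 16), n_classes)

    def forward(self, x):  # x: (batch, channels, time)
        return self.classifier(self.features(x).flatten(1))

# Usage: a batch of 8 trials, 64 electrodes, 512 time samples each.
# logits = EEGConvNet()(torch.randn(8, 64, 512))   # -> shape (8, 6)
```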
8.
9.
10.
  • Simistira Liwicki, Foteini, et al. (author)
  • Bimodal electroencephalography-functional magnetic resonance imaging dataset for inner-speech recognition
  • 2023
  • In: Scientific Data. - Springer Nature. - ISSN 2052-4463 ; 10
  • Journal article (peer-reviewed), abstract:
    • The recognition of inner speech, which could give a ‘voice’ to patients who have no ability to speak or move, is a challenge for brain-computer interfaces (BCIs). A shortcoming of the available datasets is that they do not combine modalities to increase the performance of inner speech recognition. Multimodal datasets of brain data enable the fusion of neuroimaging modalities with complementary properties, such as the high spatial resolution of functional magnetic resonance imaging (fMRI) and the high temporal resolution of electroencephalography (EEG), and are therefore promising for decoding inner speech. This paper presents the first publicly available bimodal dataset containing EEG and fMRI data acquired nonsimultaneously during inner-speech production. Data were obtained from four healthy, right-handed participants during an inner-speech task with words in either a social or numerical category. Each of the eight word stimuli was assessed with 40 trials, resulting in 320 trials in each modality for each participant. The aim of this work is to provide a publicly available bimodal dataset on inner speech, contributing towards speech prostheses.