SwePub
Search the SwePub database


Results list for the search "WFRF:(Ge Chenjie 1991)"


  • Results 1-10 of 18
1.
  • de Dios, Eddie, et al. (authors)
  • Introduction to Deep Learning in Clinical Neuroscience
  • 2022
  • In: Acta Neurochirurgica, Supplement. - Cham: Springer International Publishing. - ISSN 2197-8395, 0065-1419. ; 134, pp. 79-89
  • Book chapter (other academic/artistic) abstract
    • The use of deep learning (DL) is rapidly increasing in clinical neuroscience. The term denotes models with multiple sequential layers of learning algorithms, architecturally similar to neural networks of the brain. We provide examples of DL in analyzing MRI data and discuss potential applications and methodological caveats. Important aspects are data pre-processing, volumetric segmentation, and specific task-performing DL methods, such as CNNs and AEs. Additionally, GAN-expansion and domain mapping are useful DL techniques for generating artificial data and combining several smaller datasets. We present results of DL-based segmentation and accuracy in predicting glioma subtypes based on MRI features. Dice scores range from 0.77 to 0.89. In mixed glioma cohorts, IDH mutation can be predicted with a sensitivity of 0.98 and specificity of 0.97. Results in test cohorts have shown improvements of 5–7% in accuracy, following GAN-expansion of data and domain mapping of smaller datasets. The provided DL examples are promising, although not yet in clinical practice. DL has demonstrated usefulness in data augmentation and for overcoming data variability. DL methods should be further studied, developed, and validated for broader clinical use. Ultimately, DL models can serve as effective decision support systems, and are especially well-suited for time-consuming, detail-focused, and data-ample tasks.
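The Dice scores quoted in the abstract above measure the overlap between a predicted segmentation and the ground truth. As a point of reference, the standard metric can be computed from two binary masks like this (a stand-alone illustrative sketch, not code from the chapter; the voxel coordinates are made up):

```python
def dice_score(pred, truth):
    """Dice similarity coefficient: 2*|P & T| / (|P| + |T|),
    ranging from 0 (no overlap) to 1 (identical masks)."""
    pred, truth = set(pred), set(truth)
    total = len(pred) + len(truth)
    return 2 * len(pred & truth) / total if total else 1.0

# Made-up voxel coordinates labelled as tumour: model vs. ground truth.
predicted = [(0, 1), (0, 2), (1, 1), (1, 2)]
reference = [(0, 2), (1, 1), (1, 2), (2, 2)]
print(dice_score(predicted, reference))  # 2*3 / (4+4) = 0.75
```

A score of 0.75 here means the two four-voxel masks share three voxels; scores in the high 0.80s, as reported above, indicate close agreement.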
2.
  • de Oliveira, Roger Alves, et al. (authors)
  • Visualizing The Results From Unsupervised Deep Learning For The Analysis Of Power-Quality Data
  • 2021
  • In: CIRED 2021 - The 26th International Conference and Exhibition on Electricity Distribution. - : Institution of Engineering and Technology. ; pp. 653-657
  • Conference paper (peer-reviewed) abstract
    • This paper presents a visualisation method, based on deep learning, to assist power engineers in the analysis of large amounts of power-quality data. The method assists in extracting and understanding daily, weekly and seasonal variations in harmonic voltage. Measurements from 10 kV and 0.4 kV in a Swedish distribution network are applied to the deep learning method to obtain daily harmonic patterns and their distribution over the week and the year. The results are presented in graphs that allow interpretation of the results without having to understand the mathematical details of the method. The inferences given by the results demonstrate that the method can become a new tool that compresses power quality big data in a form that is easier to interpret.
3.
  • Ge, Chenjie, 1991, et al. (authors)
  • 3D Multi-Scale Convolutional Networks for Glioma Grading Using MR Images
  • 2018
  • In: Proceedings - International Conference on Image Processing, ICIP. - ISSN 1522-4880. - ISBN 9781479970612. ; pp. 141-145
  • Conference paper (peer-reviewed) abstract
    • This paper addresses issues of grading the brain tumor glioma from Magnetic Resonance Images (MRIs). Although feature pyramids have been shown to be useful for extracting multi-scale features for object recognition, they are rarely explored for glioma classification/grading from MRIs. For glioma grading, existing deep learning methods often use convolutional neural networks (CNNs) to extract single-scale features, without considering that the scales of brain tumor features vary depending on structure/shape, size, tissue smoothness, and location. In this paper, we propose to incorporate multi-scale feature learning into a deep convolutional network architecture, which extracts multi-scale semantic as well as fine features for glioma tumor grading. The main contributions of the paper are: (a) propose a novel 3D multi-scale convolutional network architecture for the dedicated task of glioma grading; (b) propose a novel feature fusion scheme that further refines the multi-scale features generated from the multi-scale convolutional layers; (c) propose a saliency-aware strategy to enhance the tumor regions of MRIs. Experiments were conducted on an open dataset for classifying high/low grade gliomas. Performance on the test set using the proposed scheme has shown good results (with an accuracy of 89.47%).
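The multi-scale idea described above, extracting features at several receptive-field sizes and fusing them, can be pictured with a toy 1D example (a hand-rolled sketch under simplifying assumptions; the paper's actual networks are 3D CNNs with learned filters, not fixed average pooling):

```python
def avg_pool(signal, size):
    """Mean over non-overlapping windows of the given size."""
    return [sum(signal[i:i + size]) / size
            for i in range(0, len(signal) - size + 1, size)]

def multi_scale_features(signal, scales=(1, 2, 4)):
    """Concatenate pooled views of the signal at several scales,
    mimicking the fusion of fine and coarse (semantic) features."""
    feats = []
    for s in scales:
        feats.extend(avg_pool(signal, s))
    return feats

x = [1.0, 3.0, 2.0, 4.0]
print(multi_scale_features(x))  # [1.0, 3.0, 2.0, 4.0, 2.0, 3.0, 2.5]
```

The fused vector keeps the fine detail (scale 1) alongside progressively coarser summaries, which is the intuition behind fusing features from multi-scale convolutional layers.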
4.
  • Ge, Chenjie, 1991, et al. (authors)
  • A spiking neural network model for obstacle avoidance in simulated prosthetic vision
  • 2017
  • In: Information Sciences. - : Elsevier BV. - ISSN 0020-0255. ; 399 (August 2017), pp. 30-42
  • Journal article (peer-reviewed) abstract
    • Limited by the visual percepts elicited by existing visual prostheses, it is necessary to enhance their functionality to fulfill challenging tasks for the blind, such as obstacle avoidance. This paper argues that spiking neural networks (SNN) are effective techniques for object recognition and introduces, for the first time, an SNN model for obstacle recognition to assist blind people wearing prosthetic vision devices by modelling and classifying spatio-temporal (ST) video data. The proposed methodology is based on a novel spiking neural network architecture, called NeuCube, as a general framework for video data modelling in simulated prosthetic vision. As an integrated environment including spike train encoding, input variable mapping, unsupervised reservoir training and supervised classifier training, the NeuCube consists of a spiking neural network reservoir (SNNr) and a dynamic evolving spiking neural network classifier (deSNN). First, input data is captured by the visual prosthesis, then ST feature extraction is applied to the low-resolution prosthetic vision generated by the prosthesis. Finally, such ST features are fed to the NeuCube to output the classification result of obstacle analysis, so that an early warning system can be activated. Experiments on collected video data and comparison with other computational intelligence methods indicate promising results. This makes it possible to directly utilize available neuromorphic hardware chips, embedded in visual prostheses, to significantly enhance their functionality. The proposed NeuCube-based obstacle avoidance methodology provides useful guidance to the blind, thus offering a significant improvement over current prostheses and potentially benefiting future prosthesis wearers.
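The abstract above treats spiking neurons as a given; the basic unit of such models integrates input over time and fires when a threshold is crossed. A minimal leaky integrate-and-fire sketch (illustrative only; NeuCube itself is a far richer 3D reservoir architecture, and the parameter values here are arbitrary):

```python
def lif_spikes(inputs, leak=0.9, threshold=1.0):
    """Leaky integrate-and-fire: accumulate input into a membrane
    potential that leaks each step; emit a spike and reset whenever
    the threshold is crossed."""
    v = 0.0
    spikes = []
    for x in inputs:
        v = leak * v + x          # integrate with leakage
        if v >= threshold:
            spikes.append(1)      # fire
            v = 0.0               # reset after the spike
        else:
            spikes.append(0)
    return spikes

# A constant drive of 0.4 charges the neuron to firing every third step.
print(lif_spikes([0.4] * 6))  # [0, 0, 1, 0, 0, 1]
```

Spike train encoding in such frameworks turns continuous input (e.g. pixel intensities over time) into exactly this kind of binary event sequence.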
5.
  • Ge, Chenjie, 1991, et al. (authors)
  • Co-saliency detection via inter and intra saliency propagation
  • 2016
  • In: Signal Processing: Image Communication. - ISSN 0923-5965. ; 44, pp. 69-83
  • Journal article (peer-reviewed) abstract
    • The goal of salient object detection from an image is to extract the regions which capture the attention of the human visual system more than other regions of the image. In this paper a novel method is presented for detecting salient objects from a set of images, known as co-saliency detection. We treat co-saliency detection as a two-stage saliency propagation problem. The first inter-saliency propagation stage utilizes the similarity between a pair of images to discover common properties of the images with the help of a single image saliency map. With the pairwise co-salient foreground cue maps obtained, the second intra-saliency propagation stage refines pairwise saliency detection using a graph-based method combining both foreground and background cues. A new fusion strategy is then used to obtain the co-saliency detection results. Finally an integrated multi-scale scheme is employed to obtain pixel-level co-saliency maps. The proposed method makes use of existing saliency detection models for co-saliency detection and is not overly sensitive to the initial saliency model selected. Extensive experiments on three benchmark databases show the superiority of the proposed co-saliency model against the state-of-the-art methods both subjectively and objectively.
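The inter-saliency propagation stage described above rests on transferring saliency between a pair of images in proportion to their similarity. The core update can be pictured with a toy linear blend (a hypothetical simplification; the paper uses a graph-based formulation combining foreground and background cues):

```python
def propagate_saliency(own_map, other_map, similarity):
    """Blend an image's own saliency map with its partner's map,
    weighted by how similar the two images are (0..1)."""
    return [(1 - similarity) * a + similarity * b
            for a, b in zip(own_map, other_map)]

# Two 4-pixel saliency maps from a pair of related images.
map_a = [0.9, 0.1, 0.0, 0.8]
map_b = [0.7, 0.3, 0.2, 0.6]
print(propagate_saliency(map_a, map_b, similarity=0.5))
```

The more similar the images, the more each map is pulled toward the shared (co-salient) regions; dissimilar pairs keep their own single-image saliency.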
6.
  • Ge, Chenjie, 1991, et al. (authors)
  • Co-saliency detection via similarity-based saliency propagation
  • 2015
  • In: 2015 IEEE International Conference on Image Processing (ICIP). ; pp. 1845-1849
  • Conference paper (peer-reviewed) abstract
    • In this paper, we present a method for discovering the common salient objects from a set of images. We treat co-saliency detection as a pairwise saliency propagation problem, which utilizes the similarity between each pair of images to measure the common property with the guidance of a single saliency map image. Given the pairwise co-salient foreground maps, pairwise saliency is optimized by combining the initial background cues. Pairwise co-salient maps are then fused according to a novel fusion strategy based on the focus of human attention. Finally we adopt an integrated multi-scale scheme to obtain the pixel-level saliency map. Our proposed model makes the existing single saliency model perform well in co-saliency detection and is not overly sensitive to the initial saliency model selected. Extensive experiments on two benchmark databases show the superiority of our co-saliency model against the state-of-the-art methods both subjectively and objectively.
7.
  • Ge, Chenjie, 1991, et al. (authors)
  • Co-Saliency-Enhanced Deep Recurrent Convolutional Networks for Human Fall Detection in E-Healthcare
  • 2018
  • In: Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS. - ISSN 1557-170X. ; 2018-July, pp. 1572-1575
  • Conference paper (peer-reviewed) abstract
    • This paper addresses the issue of fall detection from videos for e-healthcare and assisted living. Instead of using conventional hand-crafted features from videos, we propose a fall detection scheme based on a co-saliency-enhanced recurrent convolutional network (RCN) architecture. In the proposed scheme, a deep learning method, RCN, is realized by a set of Convolutional Neural Networks (CNNs) at the segment level, followed by a Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), to handle the time-dependent video frames. The co-saliency-based method enhances salient human activity regions and hence further improves the deep learning performance. The main contributions of the paper include: (a) propose a recurrent convolutional network (RCN) architecture that is dedicated to the task of human fall detection in videos; (b) integrate a co-saliency enhancement into the deep learning scheme for further improving the deep learning performance; (c) extensive empirical tests for performance analysis and evaluation under different network settings and data partitionings. Experiments using the proposed scheme were conducted on an open dataset containing multi-camera videos from different view angles; results have shown very good performance (test accuracy 98.96%). Comparisons with two existing methods have provided further support for the proposed scheme.
8.
  • Ge, Chenjie, 1991, et al. (authors)
  • Cross-Modality Augmentation of Brain MR Images Using a Novel Pairwise Generative Adversarial Network for Enhanced Glioma Classification
  • 2019
  • In: Proceedings - International Conference on Image Processing, ICIP. - ISSN 1522-4880.
  • Conference paper (peer-reviewed) abstract
    • Brain Magnetic Resonance Images (MRIs) are commonly used for tumor diagnosis. Machine learning for brain tumor characterization often uses MRIs from many modalities (e.g., T1-MRI, Enhanced-T1-MRI, T2-MRI and FLAIR). This paper tackles two issues that may impact brain tumor characterization performance from deep learning: an insufficiently large training dataset, and an incomplete collection of MRIs from different modalities. We propose a novel pairwise generative adversarial network (GAN) architecture for generating synthetic brain MRIs in missing modalities by using existing MRIs in other modalities. By improving the training dataset, we aim to mitigate the overfitting and improve the deep learning performance. Main contributions of the paper include: (a) propose a pairwise generative adversarial network (GAN) for brain image augmentation via cross-modality image generation; (b) propose a training strategy to enhance the glioma classification performance, where GAN-augmented images are used for pre-training, followed by refined-training using real brain MRIs; (c) demonstrate the proposed method through tests and comparisons of glioma classifiers that are trained from mixing real and GAN synthetic data, as well as from real data only. Experiments were conducted on an open TCGA dataset, containing 167 subjects for classifying IDH genotypes (mutation or wild-type). Test results from two experimental settings have both provided support for the proposed method, where glioma classification performance has consistently improved by using mixed real and augmented data (test accuracy 81.03%, with 2.57% improvement).
9.
  • Ge, Chenjie, 1991, et al. (authors)
  • Deep Feature Clustering for Seeking Patterns in Daily Harmonic Variations
  • 2021
  • In: IEEE Transactions on Instrumentation and Measurement. - : IEEE. - ISSN 0018-9456, 1557-9662. ; 70
  • Journal article (peer-reviewed) abstract
    • This article proposes a novel scheme for analyzing power system measurement data. The main question we seek to answer in this study is whether one can find important patterns hidden in large volumes of power system measurement data, such as variational data. The proposed scheme uses an unsupervised deep feature learning approach by first employing a deep autoencoder (DAE) followed by feature clustering. An analysis is performed by examining the patterns of clusters and reconstructing the representative data sequence for the clustering centers. The scheme is illustrated by applying it to the daily variations of harmonic voltage distortion in a low-voltage network. The main contributions of the article include: 1) providing a new unsupervised deep feature learning approach for seeking possible underlying patterns of power system variation measurements and 2) proposing an effective empirical analysis approach for understanding the measurements through examining the underlying feature clusters and the associated data reconstructed by the DAE.
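The feature-clustering step described above can be pictured with a tiny stand-alone k-means on synthetic "daily profiles" (a pure-Python sketch with made-up data; the article clusters features learned by a deep autoencoder, which is omitted here):

```python
def kmeans(points, iters=10):
    """Plain k-means with k=2: assign each point to the nearest centre,
    then move each centre to the mean of its assigned points."""
    centres = [points[0], points[-1]]  # deterministic init for the sketch
    for _ in range(iters):
        groups = [[] for _ in centres]
        for p in points:
            j = min(range(len(centres)),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centres[c])))
            groups[j].append(p)
        centres = [tuple(sum(col) / len(g) for col in zip(*g))
                   if g else centres[j]
                   for j, g in enumerate(groups)]
    return centres

# Two synthetic "daily patterns": flat profiles vs. evening-peaked ones.
profiles = [(1.0, 1.0, 1.1), (1.1, 0.9, 1.0),
            (0.2, 0.3, 2.9), (0.3, 0.2, 3.1)]
print(sorted(kmeans(profiles)))
```

With the DAE in place, the points would be low-dimensional codes of measured daily harmonic sequences rather than raw profiles, and the cluster centres would be decoded back into representative daily patterns.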
10.
  • Ge, Chenjie, 1991, et al. (authors)
  • Deep Learning and Multi-Sensor Fusion for Glioma Classification Using Multistream 2D Convolutional Networks
  • 2018
  • In: Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS. - ISSN 1557-170X. ; pp. 5894-5897
  • Conference paper (peer-reviewed) abstract
    • This paper addresses issues of grading the brain tumor glioma from multi-sensor images. Different types of scanners (or sensors), like enhanced T1-MRI, T2-MRI and FLAIR, show different contrast and are sensitive to different brain tissues and fluid regions. Most existing works use 3D brain images from a single sensor. In this paper, we propose a novel multistream deep Convolutional Neural Network (CNN) architecture that extracts and fuses the features from multiple sensors for glioma tumor grading/subcategory grading. The main contributions of the paper are: (a) propose a novel multistream deep CNN architecture for glioma grading; (b) apply sensor fusion from T1-MRI, T2-MRI and/or FLAIR for enhancing performance through feature aggregation; (c) mitigate overfitting by using 2D brain image slices in combination with 2D image augmentation. Two datasets were used for our experiments, one for classifying low/high grade gliomas, the other for classifying gliomas with/without 1p19q codeletion. Experiments using the proposed scheme have shown good results (with test accuracy of 90.87% for the former case and 89.39% for the latter). Comparisons with several existing methods have provided further support to the proposed scheme.