SwePub

Results list for the search "WFRF:(Gu Chenjie)"

Search: WFRF:(Gu Chenjie)

  • Results 1-10 of 15
1.
  • de Dios, Eddie, et al. (authors)
  • Introduction to Deep Learning in Clinical Neuroscience
  • 2022
  • In: Acta Neurochirurgica, Supplement. - Cham: Springer International Publishing. - ISSN 2197-8395, 0065-1419. ; pp. 79-89
  • Book chapter (other academic/artistic), abstract:
    • The use of deep learning (DL) is rapidly increasing in clinical neuroscience. The term denotes models with multiple sequential layers of learning algorithms, architecturally similar to neural networks of the brain. We provide examples of DL in analyzing MRI data and discuss potential applications and methodological caveats. Important aspects are data pre-processing, volumetric segmentation, and specific task-performing DL methods, such as CNNs and AEs. Additionally, GAN-expansion and domain mapping are useful DL techniques for generating artificial data and combining several smaller datasets. We present results of DL-based segmentation and accuracy in predicting glioma subtypes based on MRI features. Dice scores range from 0.77 to 0.89. In mixed glioma cohorts, IDH mutation can be predicted with a sensitivity of 0.98 and specificity of 0.97. Results in test cohorts have shown improvements of 5–7% in accuracy, following GAN-expansion of data and domain mapping of smaller datasets. The provided DL examples are promising, although not yet in clinical practice. DL has demonstrated usefulness in data augmentation and for overcoming data variability. DL methods should be further studied, developed, and validated for broader clinical use. Ultimately, DL models can serve as effective decision support systems, and are especially well-suited for time-consuming, detail-focused, and data-ample tasks.
  •  
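As context for the Dice scores quoted in the abstract above: the Dice coefficient measures the overlap between a predicted segmentation and a reference mask, 2|A∩B|/(|A|+|B|). A minimal Python sketch (not code from the chapter; the toy masks are invented):

    import numpy as np

    def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
        """Dice coefficient 2|A∩B| / (|A| + |B|) for binary segmentation masks."""
        pred = pred.astype(bool)
        target = target.astype(bool)
        intersection = np.logical_and(pred, target).sum()
        return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

    # Toy 2D masks; real use would pass 3D tumor masks from the segmentation model.
    pred = np.array([[1, 1, 0], [0, 1, 0]])
    target = np.array([[1, 0, 0], [0, 1, 1]])
    print(round(dice_score(pred, target), 2))  # 0.67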
2.
  • de Oliveira, Roger Alves, et al. (authors)
  • Visualizing The Results From Unsupervised Deep Learning For The Analysis Of Power-Quality Data
  • 2021
  • In: CIRED 2021 - The 26th International Conference and Exhibition on Electricity Distribution. - Institution of Engineering and Technology. ; pp. 653-657
  • Conference paper (peer-reviewed), abstract:
    • This paper presents a visualisation method, based on deep learning, to assist power engineers in the analysis of large amounts of power-quality data. The method assists in extracting and understanding daily, weekly and seasonal variations in harmonic voltage. Measurements at 10 kV and 0.4 kV in a Swedish distribution network are applied to the deep learning method to obtain daily harmonic patterns and their distribution over the week and the year. The results are presented in graphs that allow interpretation without having to understand the mathematical details of the method. The results demonstrate that the method can become a new tool that compresses power-quality big data into a form that is easier to interpret.
  •  
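As a rough illustration of the kind of graph described above (not the authors' code), daily pattern labels produced by a deep-learning model could be displayed as a weekday-by-pattern occurrence heatmap; the labels below are random placeholders.

    import numpy as np
    import matplotlib.pyplot as plt

    # Placeholder: one pattern label per day of a year (e.g., from deep feature clustering).
    rng = np.random.default_rng(0)
    labels = rng.integers(0, 4, size=365)      # 4 hypothetical daily harmonic patterns
    weekdays = np.arange(365) % 7              # 0 = Monday ... 6 = Sunday

    # Count how often each pattern occurs on each weekday.
    occurrence = np.zeros((7, 4), dtype=int)
    for wd, lab in zip(weekdays, labels):
        occurrence[wd, lab] += 1

    plt.imshow(occurrence, aspect="auto", cmap="viridis")
    plt.xlabel("daily harmonic pattern (cluster)")
    plt.ylabel("weekday")
    plt.colorbar(label="days")
    plt.title("Occurrence of daily patterns over the week (synthetic data)")
    plt.show()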
3.
  • Ge, Chenjie, 1991, et al. (authors)
  • 3D Multi-Scale Convolutional Networks for Glioma Grading Using MR Images
  • 2018
  • In: Proceedings - International Conference on Image Processing, ICIP. - ISSN 1522-4880. - ISBN 9781479970612. ; pp. 141-145
  • Conference paper (peer-reviewed), abstract:
    • This paper addresses the grading of the brain tumor glioma from Magnetic Resonance Images (MRIs). Although feature pyramids have been shown to be useful for extracting multi-scale features in object recognition, they are rarely explored in MRI for glioma classification/grading. For glioma grading, existing deep learning methods often use convolutional neural networks (CNNs) to extract single-scale features, without considering that the scales of brain tumor features vary depending on structure/shape, size, tissue smoothness, and location. In this paper, we propose to incorporate multi-scale feature learning into a deep convolutional network architecture, which extracts multi-scale semantic as well as fine features for glioma grading. The main contributions of the paper are: (a) a novel 3D multi-scale convolutional network architecture for the dedicated task of glioma grading; (b) a novel feature fusion scheme that further refines the multi-scale features generated from the multi-scale convolutional layers; (c) a saliency-aware strategy to enhance tumor regions of MRIs. Experiments were conducted on an open dataset for classifying high/low grade gliomas. Performance on the test set using the proposed scheme has shown good results (accuracy of 89.47%).
  •  
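A minimal PyTorch sketch of the multi-scale idea described above: parallel 3D convolutions with different kernel sizes whose outputs are fused before classification. This illustrates the general technique only; the layer counts, channel sizes and 1x1-convolution fusion are invented, not the paper's architecture.

    import torch
    import torch.nn as nn

    class MultiScaleBlock3D(nn.Module):
        """Extract features at several receptive fields and fuse them by concatenation."""
        def __init__(self, in_ch: int, out_ch: int):
            super().__init__()
            self.branch3 = nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1)
            self.branch5 = nn.Conv3d(in_ch, out_ch, kernel_size=5, padding=2)
            self.branch7 = nn.Conv3d(in_ch, out_ch, kernel_size=7, padding=3)
            self.fuse = nn.Conv3d(3 * out_ch, out_ch, kernel_size=1)  # simple fusion
            self.act = nn.ReLU(inplace=True)

        def forward(self, x):
            feats = torch.cat([self.branch3(x), self.branch5(x), self.branch7(x)], dim=1)
            return self.act(self.fuse(feats))

    class GliomaGrader(nn.Module):
        """Toy grader: multi-scale block, global pooling, high/low-grade logits."""
        def __init__(self):
            super().__init__()
            self.features = MultiScaleBlock3D(in_ch=1, out_ch=16)
            self.pool = nn.AdaptiveAvgPool3d(1)
            self.fc = nn.Linear(16, 2)

        def forward(self, x):                  # x: (batch, 1, D, H, W) MRI volume
            f = self.pool(self.features(x)).flatten(1)
            return self.fc(f)

    print(GliomaGrader()(torch.randn(2, 1, 32, 64, 64)).shape)  # torch.Size([2, 2])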
4.
  • Ge, Chenjie, 1991, et al. (authors)
  • Co-Saliency-Enhanced Deep Recurrent Convolutional Networks for Human Fall Detection in E-Healthcare
  • 2018
  • In: Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS. - ISSN 1557-170X. ; pp. 1572-1575
  • Conference paper (peer-reviewed), abstract:
    • This paper addresses the issue of fall detection from videos for e-healthcare and assisted living. Instead of using conventional hand-crafted features from videos, we propose a fall detection scheme based on a co-saliency-enhanced recurrent convolutional network (RCN) architecture. In the proposed scheme, the RCN is realized by a set of Convolutional Neural Networks (CNNs) at the segment level, followed by a Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), to handle the time-dependent video frames. The co-saliency-based method enhances salient human activity regions and hence further improves the deep learning performance. The main contributions of the paper include: (a) a recurrent convolutional network (RCN) architecture dedicated to the task of human fall detection in videos; (b) a co-saliency enhancement integrated into the deep learning scheme to further improve performance; (c) extensive empirical tests for performance analysis and evaluation under different network settings and data partitionings. Experiments using the proposed scheme were conducted on an open dataset containing multicamera videos from different view angles; the results have shown very good performance (test accuracy 98.96%). Comparisons with two existing methods provide further support for the proposed scheme.
  •  
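A minimal PyTorch sketch of a recurrent convolutional pipeline of the kind described above: a per-frame CNN feeding an LSTM whose final state is classified as fall / no fall. The tiny backbone and feature sizes are placeholders, not the paper's configuration, and the co-saliency enhancement of the input frames is omitted here.

    import torch
    import torch.nn as nn

    class RecurrentConvNet(nn.Module):
        """Per-frame CNN features -> LSTM over time -> fall / no-fall logits."""
        def __init__(self, feat_dim: int = 64, hidden: int = 128):
            super().__init__()
            self.cnn = nn.Sequential(          # tiny stand-in backbone
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim),
            )
            self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 2)

        def forward(self, clips):              # clips: (B, T, 3, H, W) video segments
            b, t = clips.shape[:2]
            feats = self.cnn(clips.reshape(b * t, *clips.shape[2:])).reshape(b, t, -1)
            out, _ = self.lstm(feats)
            return self.head(out[:, -1])       # classify from the last time step

    print(RecurrentConvNet()(torch.randn(2, 8, 3, 64, 64)).shape)  # torch.Size([2, 2])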
5.
  • Ge, Chenjie, 1991, et al. (authors)
  • Cross-Modality Augmentation of Brain MR Images Using a Novel Pairwise Generative Adversarial Network for Enhanced Glioma Classification
  • 2019
  • In: Proceedings - International Conference on Image Processing, ICIP. - ISSN 1522-4880.
  • Conference paper (peer-reviewed), abstract:
    • Brain Magnetic Resonance Images (MRIs) are commonly used for tumor diagnosis. Machine learning for brain tumor characterization often uses MRIs from many modalities (e.g., T1-MRI, Enhanced-T1-MRI, T2-MRI and FLAIR). This paper tackles two issues that may impact brain tumor characterization performance with deep learning: an insufficiently large training dataset, and an incomplete collection of MRIs from different modalities. We propose a novel pairwise generative adversarial network (GAN) architecture for generating synthetic brain MRIs in missing modalities from existing MRIs in other modalities. By enlarging the training dataset, we aim to mitigate overfitting and improve the deep learning performance. The main contributions of the paper include: (a) a pairwise generative adversarial network (GAN) for brain image augmentation via cross-modality image generation; (b) a training strategy to enhance glioma classification performance, where GAN-augmented images are used for pre-training, followed by refined training using real brain MRIs; (c) a demonstration of the proposed method through tests and comparisons of glioma classifiers trained on mixed real and GAN-synthetic data, as well as on real data only. Experiments were conducted on an open TCGA dataset containing 167 subjects for classifying IDH genotypes (mutation or wild-type). Test results from two experimental settings have both provided support for the proposed method, where glioma classification performance consistently improved by using mixed real and augmented data (test accuracy 81.03%, a 2.57% improvement).
  •  
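A compact sketch of cross-modality generation with an adversarial pair, assuming 2D slices and a single source-to-target direction (e.g. T1 to FLAIR). The networks and the training snippet below are generic GAN components for illustration, not the paper's pairwise GAN.

    import torch
    import torch.nn as nn

    class ModalityGenerator(nn.Module):
        """Translate a slice of one MRI modality into another (encoder-decoder)."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Tanh(),
            )
        def forward(self, x):
            return self.net(x)

    class Discriminator(nn.Module):
        """Decide whether a slice in the target modality is real or generated."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
            )
        def forward(self, x):
            return self.net(x)

    gen, disc = ModalityGenerator(), Discriminator()
    bce = nn.BCEWithLogitsLoss()
    t1 = torch.randn(4, 1, 128, 128)                   # source-modality slices (placeholder)
    fake_flair = gen(t1)                               # synthetic target-modality slices
    g_loss = bce(disc(fake_flair), torch.ones(4, 1))   # generator tries to fool the discriminator
    print(fake_flair.shape)                            # torch.Size([4, 1, 128, 128])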
6.
  • Ge, Chenjie, 1991, et al. (authors)
  • Deep Feature Clustering for Seeking Patterns in Daily Harmonic Variations
  • 2021
  • In: IEEE Transactions on Instrumentation and Measurement. - IEEE. - ISSN 0018-9456, 1557-9662. ; 70
  • Journal article (peer-reviewed), abstract:
    • This article proposes a novel scheme for analyzing power system measurement data. The main question we seek to answer in this study is whether one can find important patterns that are hidden in large volumes of power system measurements, such as variational data. The proposed scheme uses an unsupervised deep feature learning approach, first employing a deep autoencoder (DAE) followed by feature clustering. An analysis is performed by examining the patterns of the clusters and reconstructing the representative data sequence for each clustering center. The scheme is illustrated by applying it to the daily variations of harmonic voltage distortion in a low-voltage network. The main contributions of the article include: 1) a new unsupervised deep feature learning approach for seeking possible underlying patterns in power system variation measurements and 2) an effective empirical analysis approach for understanding the measurements by examining the underlying feature clusters and the associated data reconstructed by the DAE.
  •  
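A compact sketch of the overall approach described above: a deep autoencoder compresses daily measurement sequences, k-means clusters the latent codes, and the cluster centers are decoded back into representative daily patterns. The dimensions, training loop and cluster count are arbitrary choices for illustration, not the article's implementation.

    import torch
    import torch.nn as nn
    from sklearn.cluster import KMeans

    # Placeholder data: 365 daily sequences of a harmonic voltage, 144 samples per day.
    daily = torch.randn(365, 144)

    encoder = nn.Sequential(nn.Linear(144, 32), nn.ReLU(), nn.Linear(32, 8))
    decoder = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 144))
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

    for _ in range(200):                      # train the autoencoder to reconstruct days
        recon = decoder(encoder(daily))
        loss = nn.functional.mse_loss(recon, daily)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Cluster the learned low-dimensional features and decode the cluster centers
    # into representative daily patterns for inspection.
    codes = encoder(daily).detach().numpy()
    km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(codes)
    centers = decoder(torch.tensor(km.cluster_centers_, dtype=torch.float32))
    print(km.labels_[:10], centers.shape)     # cluster per day, (4, 144) center patterns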
7.
  • Ge, Chenjie, 1991, et al. (authors)
  • Deep Learning and Multi-Sensor Fusion for Glioma Classification Using Multistream 2D Convolutional Networks
  • 2018
  • In: Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS. - ISSN 1557-170X. ; pp. 5894-5897
  • Conference paper (peer-reviewed), abstract:
    • This paper addresses the grading of the brain tumor glioma from multi-sensor images. Different types of scanners (or sensors), such as enhanced T1-MRI, T2-MRI and FLAIR, show different contrast and are sensitive to different brain tissues and fluid regions. Most existing works use 3D brain images from a single sensor. In this paper, we propose a novel multistream deep Convolutional Neural Network (CNN) architecture that extracts and fuses features from multiple sensors for glioma grading/subcategory grading. The main contributions of the paper are: (a) a novel multistream deep CNN architecture for glioma grading; (b) sensor fusion of T1-MRI, T2-MRI and/or FLAIR to enhance performance through feature aggregation; (c) mitigation of overfitting by using 2D brain image slices in combination with 2D image augmentation. Two datasets were used for our experiments: one for classifying low/high grade gliomas, and another for classifying gliomas with/without 1p19q codeletion. Experiments using the proposed scheme have shown good results (test accuracy of 90.87% for the former case and 89.39% for the latter). Comparisons with several existing methods provide further support for the proposed scheme.
  •  
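A minimal PyTorch sketch of the multistream idea: one small 2D CNN per modality, features fused by concatenation, then a shared classification head. The backbone and sizes are placeholders for illustration, not the paper's network.

    import torch
    import torch.nn as nn

    def stream():
        """One small 2D CNN stream for a single MRI modality (placeholder backbone)."""
        return nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    class MultiStreamClassifier(nn.Module):
        """Fuse per-modality features by concatenation, then grade the slice."""
        def __init__(self, n_streams: int = 3, n_classes: int = 2):
            super().__init__()
            self.streams = nn.ModuleList([stream() for _ in range(n_streams)])
            self.head = nn.Linear(32 * n_streams, n_classes)

        def forward(self, slices):             # slices: list of (B, 1, H, W), one per modality
            fused = torch.cat([s(x) for s, x in zip(self.streams, slices)], dim=1)
            return self.head(fused)

    t1, t2, flair = (torch.randn(4, 1, 128, 128) for _ in range(3))
    print(MultiStreamClassifier()([t1, t2, flair]).shape)   # torch.Size([4, 2])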
8.
  • Ge, Chenjie, 1991, et al. (authors)
  • Deep semi-supervised learning for brain tumor classification
  • 2020
  • In: BMC Medical Imaging. - Springer Science and Business Media LLC. - ISSN 1471-2342. ; 20:1
  • Journal article (peer-reviewed), abstract:
    • Background: This paper addresses the classification of the brain tumor glioma from four modalities of Magnetic Resonance Image (MRI) scans (i.e., T1-weighted MRI, contrast-enhanced T1-weighted MRI, T2-weighted MRI and FLAIR). Currently, many available glioma datasets contain some unlabeled brain scans, and many datasets are moderate in size. Methods: We propose to exploit deep semi-supervised learning to make full use of the unlabeled data. Deep CNN features were incorporated into a new graph-based semi-supervised learning framework for learning the labels of the unlabeled data, where a new 3D-2D consistency constraint is added to enforce consistent classifications for 2D slices from the same 3D brain scan. A deep learning classifier is then trained to classify different glioma types using both the labeled data and the unlabeled data with estimated labels. To alleviate the overfitting caused by moderate-size datasets, synthetic MRIs generated by Generative Adversarial Networks (GANs) are added to the training of the CNNs. Results: The proposed scheme has been tested on two glioma datasets: the TCGA dataset for IDH-mutation prediction (molecular-based glioma subtype classification) and the MICCAI dataset for glioma grading. Our results have shown good performance (test accuracies of 86.53% on the TCGA dataset and 90.70% on the MICCAI dataset). Conclusions: The proposed scheme is effective for glioma IDH-mutation prediction and glioma grading, and its performance is comparable to the state of the art.
  •  
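The graph-based step can be illustrated with a generic stand-in: scikit-learn's LabelSpreading run on pre-extracted slice features, with -1 marking unlabeled slices. This is only a sketch of the idea; it is not the paper's framework and omits the 3D-2D consistency constraint and the GAN-based augmentation.

    import numpy as np
    from sklearn.semi_supervised import LabelSpreading

    # Placeholder CNN features for 2D slices: 200 labeled, 300 unlabeled (label -1).
    rng = np.random.default_rng(0)
    features = rng.normal(size=(500, 64))
    labels = np.concatenate([rng.integers(0, 2, size=200), -np.ones(300, dtype=int)])

    # Build a k-NN graph over the features and propagate labels to the unlabeled slices.
    model = LabelSpreading(kernel="knn", n_neighbors=10, alpha=0.2)
    model.fit(features, labels)
    estimated = model.transduction_            # estimated labels for all 500 slices

    # The estimated labels can then be used, together with the labeled data,
    # to train a supervised classifier, as in the pipeline described above.
    print(estimated[:10])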
9.
  • Ge, Chenjie, 1991, et al. (authors)
  • Enlarged Training Dataset by Pairwise GANs for Molecular-Based Brain Tumor Classification
  • 2020
  • In: IEEE Access. - ISSN 2169-3536. ; 8:1, pp. 22560-22570
  • Journal article (peer-reviewed), abstract:
    • This paper addresses brain tumor subtype classification using Magnetic Resonance Images (MRIs) from different scanner modalities, such as T1-weighted, contrast-enhanced T1-weighted, T2-weighted and FLAIR images. Currently, most available glioma datasets are relatively moderate in size and are often accompanied by incomplete MRIs in different modalities. To tackle the commonly encountered problems of insufficiently large brain tumor datasets and incomplete image modalities for deep learning, we propose to add augmented brain MR images to enlarge the training dataset by employing a pairwise Generative Adversarial Network (GAN) model. The pairwise GAN is able to generate synthetic MRIs across different modalities. To achieve a patient-level diagnostic result, we propose a post-processing strategy that combines the slice-level glioma subtype classification results by majority voting. A two-stage coarse-to-fine training strategy is proposed to learn the glioma features using GAN-augmented MRIs followed by real MRIs. To evaluate the effectiveness of the proposed scheme, experiments have been conducted on a brain tumor dataset for classifying glioma molecular subtypes: isocitrate dehydrogenase 1 (IDH1) mutation and IDH1 wild-type. Our results on the dataset have shown good performance (test accuracy 88.82%). Comparisons with several state-of-the-art methods are also included.
  •  
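The slice-to-patient post-processing described above is plain majority voting over slice-level predictions. A minimal sketch (the slice predictions are made up) is:

    from collections import Counter

    def patient_level_vote(slice_predictions):
        """Combine per-slice subtype predictions into one patient-level decision."""
        votes = Counter(slice_predictions)
        label, count = votes.most_common(1)[0]
        return label, count / len(slice_predictions)

    # Hypothetical predictions for 9 axial slices of one patient.
    preds = ["IDH1-mutation"] * 6 + ["IDH1-wild-type"] * 3
    print(patient_level_vote(preds))           # ('IDH1-mutation', 0.666...)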
10.
  • Ge, Chenjie, 1991, et al. (authors)
  • Human Fall Detection using Co-Saliency-Enhanced Deep Recurrent Convolutional Neural Networks
  • 2019
  • In: International Research Journal of Engineering and Technology (IRJET). - ISSN 2395-0056. ; 6:9, pp. 993-1000
  • Journal article (peer-reviewed), abstract:
    • This paper addresses fall detection from videos for e-healthcare and assisted living. Instead of using hand-crafted features from videos, we exploit a dedicated recurrent convolutional network (RCN) architecture for fall detection in combination with co-saliency enhancement. In the proposed scheme, the recurrent neural network (RNN) is realized by Long Short-Term Memory (LSTM) connected to a set of Convolutional Neural Networks (CNNs), where each video is modelled as an ordered sequence containing several frames. In this way, the sequential information in the video is preserved. To further enhance the performance, we propose to employ co-saliency-enhanced video frames as the inputs of the RCN, where salient human activity regions are enhanced. Experimental results have shown that the proposed scheme is effective. Further, our results have shown very good test performance (accuracy 98.12%), and employing the co-saliency-enhanced RCN has led to an improvement in performance (0.70% on the test set) compared to the scheme without co-saliency. Comparisons with two existing methods provide further support for the effectiveness of the proposed scheme.
  •  
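The co-saliency enhancement amounts to emphasising salient human-activity regions in each frame before the frames enter the network. A crude sketch of that weighting step, using a placeholder saliency map rather than an actual co-saliency algorithm, is:

    import numpy as np

    def enhance_frame(frame: np.ndarray, saliency: np.ndarray, strength: float = 1.0):
        """Weight frame pixels by a [0, 1] saliency map so salient regions stand out."""
        weight = 1.0 + strength * saliency[..., None]     # broadcast over RGB channels
        enhanced = frame.astype(np.float32) * weight
        return np.clip(enhanced, 0, 255).astype(np.uint8)

    # Placeholder frame and saliency map; a real pipeline would compute co-saliency
    # jointly over the video frames and feed the enhanced frames to the RCN.
    frame = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
    saliency = np.random.rand(64, 64)
    print(enhance_frame(frame, saliency).shape)           # (64, 64, 3)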
