SwePub
Search the SwePub database

  Advanced search

Boolean operators must be written in UPPER CASE

AND is the default operator and may be omitted
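For example, the queries glioma AND oncology and glioma oncology return the same hits, since AND is implied between terms; other Boolean operators such as OR must likewise be written in upper case to be treated as operators rather than as search terms.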

Result list for the search "AMNE:(MEDICAL AND HEALTH SCIENCES Clinical Medicine Cancer and Oncology) ;hsvcat:2"

Search: AMNE:(MEDICAL AND HEALTH SCIENCES Clinical Medicine Cancer and Oncology) > Engineering and Technology

  • Results 1-10 of 130
1.
  • Nyholm, Tufve, et al. (author)
  • A national approach for automated collection of standardized and population-based radiation therapy data in Sweden
  • 2016
  • In: Radiotherapy and Oncology. - : Elsevier BV. - 0167-8140 .- 1879-0887. ; 119:2, pp. 344-350
  • Journal article (peer-reviewed). Abstract:
    • Purpose: To develop an infrastructure for structured and automated collection of interoperable radiation therapy (RT) data into a national clinical quality registry. Materials and methods: The present study was initiated in 2012 with the participation of seven of the 15 hospital departments delivering RT in Sweden. A national RT nomenclature and a database for structured unified storage of RT data at each site (Medical Information Quality Archive, MIQA) have been developed. Aggregated data from the MIQA databases are sent to a national RT registry located on the same IT platform (INCA) as the national clinical cancer registries. Results: The suggested naming convention has to date been integrated into the clinical workflow at 12 of 15 sites, and MIQA is installed at six of these. Involvement of the remaining 3/15 RT departments is ongoing, and they are expected to be part of the infrastructure by 2016. RT data collection from ARIA®, Mosaiq®, Eclipse™, and Oncentra® is supported. Manual curation of RT-structure information is needed for approximately 10% of target volumes, but rarely for normal tissue structures, demonstrating good compliance with the RT nomenclature. Aggregated dose/volume descriptors are calculated based on the information in MIQA and sent to INCA using a dedicated service (MIQA2INCA). Correct linkage of data for each patient to the clinical cancer registries on the INCA platform is assured by the unique Swedish personal identity number. Conclusions: An infrastructure for structured and automated prospective collection of syntactically interoperable RT data into a national clinical quality registry for RT data is under implementation. Future developments include adapting MIQA to other treatment modalities (e.g. proton therapy and brachytherapy) and finding strategies to harmonize structure delineations. How the RT registry should comply with domain-specific ontologies such as the Radiation Oncology Ontology (ROO) is under discussion.
  •  
2.
  • Ali, Muhaddisa Barat, 1986, et al. (author)
  • A novel federated deep learning scheme for glioma and its subtype classification
  • 2023
  • In: Frontiers in Neuroscience. - 1662-4548 .- 1662-453X. ; 17
  • Journal article (peer-reviewed). Abstract:
    • Background: Deep learning (DL) has shown promising results in molecular-based classification of glioma subtypes from MR images. DL requires a large amount of training data to achieve good generalization performance. Since brain tumor datasets are usually small, combining such datasets from different hospitals is needed. Data privacy issues at hospitals often pose a constraint on such a practice. Federated learning (FL) has gained much attention lately as it trains a central DL model without requiring data sharing between hospitals. Method: We propose a novel 3D FL scheme for glioma and its molecular subtype classification. In the scheme, a slice-based DL classifier, EtFedDyn, an extension of FedDyn, is exploited, with the key differences being the use of a focal loss cost function to tackle severe class imbalance in the datasets and a multi-stream network to exploit MRIs in different modalities. By combining EtFedDyn with domain mapping as the pre-processing and 3D scan-based post-processing, the proposed scheme performs 3D brain scan-based classification on datasets from different dataset owners. To examine whether the FL scheme could replace the central learning (CL) one, we then compare the classification performance between the proposed FL and the corresponding CL schemes. Furthermore, detailed empirical analyses were also conducted to examine the effect of using domain mapping, 3D scan-based post-processing, different cost functions and different FL schemes. Results: Experiments were done on two case studies: classification of glioma subtypes (IDH mutation and wild-type on TCGA and US datasets in case A) and glioma grades (high/low grade glioma, HGG and LGG, on the MICCAI dataset in case B). The proposed FL scheme obtained good performance on the test sets (85.46%, 75.56%) for IDH subtypes and (89.28%, 90.72%) for glioma LGG/HGG, all averaged over five runs. Compared with the corresponding CL scheme, the drop in test accuracy from the proposed FL scheme is small (−1.17%, −0.83%), indicating its good potential to replace the CL scheme. Furthermore, the empirical tests have shown an increased classification test accuracy by applying: domain mapping (0.4%, 1.85%) in case A; the focal loss function (1.66%, 3.25%) in case A and (1.19%, 1.85%) in case B; 3D post-processing (2.11%, 2.23%) in case A and (1.81%, 2.39%) in case B; and EtFedDyn over the FedAvg classifier (1.05%, 1.55%) in case A and (1.23%, 1.81%) in case B with fast convergence, all of which contributed to the improvement of overall performance in the proposed FL scheme. Conclusion: The proposed FL scheme is shown to be effective in predicting glioma and its subtypes from MR images in the test sets, with great potential to replace the conventional CL approaches for training deep networks. This could help hospitals maintain their data privacy while using a federated-trained classifier with performance nearly matching that of a centrally trained one. Further detailed experiments have shown that different parts of the proposed 3D FL scheme, such as domain mapping (making datasets more uniform) and post-processing (scan-based classification), are essential.
  •  
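The focal loss mentioned in the record above is a standard remedy for severe class imbalance: it down-weights well-classified examples so training concentrates on the rare, hard class. Below is a minimal PyTorch sketch of a binary focal loss; the alpha and gamma values and the toy data are illustrative assumptions, not the settings used in EtFedDyn.

    import torch
    import torch.nn.functional as F

    def binary_focal_loss(logits, targets, alpha=0.25, gamma=2.0):
        """Binary focal loss: cross-entropy scaled by (1 - p_t)^gamma so that
        easy examples contribute little and the rare class dominates training."""
        bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
        p_t = torch.exp(-bce)                                        # probability of the true class
        alpha_t = alpha * targets + (1.0 - alpha) * (1.0 - targets)  # class-balancing weight
        return (alpha_t * (1.0 - p_t) ** gamma * bce).mean()

    # Illustrative usage on a heavily imbalanced toy batch (assumed data)
    logits = torch.randn(8)
    targets = torch.tensor([0., 0., 0., 0., 0., 0., 1., 0.])
    print(binary_focal_loss(logits, targets))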
3.
  • Ge, Chenjie, 1991, et al. (author)
  • Enlarged Training Dataset by Pairwise GANs for Molecular-Based Brain Tumor Classification
  • 2020
  • In: IEEE Access. - 2169-3536 .- 2169-3536. ; 8:1, pp. 22560-22570
  • Journal article (peer-reviewed). Abstract:
    • This paper addresses issues of brain tumor subtype classification using Magnetic Resonance Images (MRIs) from different scanner modalities such as T1-weighted, contrast-enhanced T1-weighted, T2-weighted and FLAIR images. Currently, most available glioma datasets are relatively moderate in size and often accompanied by incomplete MRIs in different modalities. To tackle the commonly encountered problems of insufficiently large brain tumor datasets and incomplete image modalities for deep learning, we propose to add augmented brain MR images to enlarge the training dataset by employing a pairwise Generative Adversarial Network (GAN) model. The pairwise GAN is able to generate synthetic MRIs across different modalities. To achieve a patient-level diagnostic result, we propose a post-processing strategy that combines the slice-level glioma subtype classification results by majority voting. A two-stage coarse-to-fine training strategy is proposed to learn the glioma features using GAN-augmented MRIs followed by real MRIs. To evaluate the effectiveness of the proposed scheme, experiments have been conducted on a brain tumor dataset for classifying glioma molecular subtypes: isocitrate dehydrogenase 1 (IDH1) mutation and IDH1 wild-type. Our results on the dataset have shown good performance (with test accuracy 88.82%). Comparisons with several state-of-the-art methods are also included.
  •  
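The patient-level post-processing described in the record above combines slice-level predictions by majority voting. A minimal sketch, assuming the per-slice class labels are already available:

    from collections import Counter

    def patient_level_label(slice_predictions):
        """Reduce per-slice class labels to one patient-level label by majority vote."""
        return Counter(slice_predictions).most_common(1)[0][0]

    # Illustrative example: predictions for five axial slices of one patient
    slices = ["IDH1-mutant", "IDH1-wildtype", "IDH1-mutant", "IDH1-mutant", "IDH1-wildtype"]
    print(patient_level_label(slices))  # -> IDH1-mutant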
4.
  • Haj-Hosseini, Neda, 1980-, et al. (author)
  • Early Detection of Oral Potentially Malignant Disorders: A Review on Prospective Screening Methods with Regard to Global Challenges
  • 2022
  • In: Journal of Maxillofacial & Oral Surgery. - New Delhi, India : Springer Science and Business Media LLC. - 0972-8279 .- 0974-942X.
  • Journal article (peer-reviewed). Abstract:
    • Oral cancer is widely prevalent in low- and middle-income countries, with a high mortality rate and poor quality of life for patients after treatment. Early treatment of cancer increases patient survival, improves quality of life and results in less morbidity and a better prognosis. To reach this goal, early detection of malignancies using technologies that can be used in remote and low-resource areas is desirable. Such technologies should be affordable, accurate, and easy to use and interpret. This review surveys different technologies that have the potential for implementation in primary health care and general dental practice, considering global perspectives and with a focus on the population in India, where oral cancer is highly prevalent. The technologies reviewed include both sample-based methods, such as saliva and blood analysis and brush biopsy, and more direct screening of the oral cavity including fluorescence, Raman techniques, and optical coherence tomography. Digitalisation, followed by automated artificial intelligence-based analysis, is a key element in facilitating wide access to these technologies for non-specialist personnel and in rural areas, increasing the quality and objectivity of the analysis while reducing the labour and the need for highly trained specialists.
  •  
5.
  • Ge, Chenjie, 1991, et al. (author)
  • 3D Multi-Scale Convolutional Networks for Glioma Grading Using MR Images
  • 2018
  • In: Proceedings - International Conference on Image Processing, ICIP. - 1522-4880. - 9781479970612 ; pp. 141-145
  • Conference paper (peer-reviewed). Abstract:
    • This paper addresses issues of grading the brain tumor glioma from Magnetic Resonance Images (MRIs). Although feature pyramids have been shown to be useful for extracting multi-scale features for object recognition, they are rarely explored in MR images for glioma classification/grading. For glioma grading, existing deep learning methods often use convolutional neural networks (CNNs) to extract single-scale features without considering that the scales of brain tumor features vary depending on structure/shape, size, tissue smoothness, and location. In this paper, we propose to incorporate multi-scale feature learning into a deep convolutional network architecture, which extracts multi-scale semantic as well as fine features for glioma grading. The main contributions of the paper are: (a) a novel 3D multi-scale convolutional network architecture for the dedicated task of glioma grading; (b) a novel feature fusion scheme that further refines the multi-scale features generated from the multi-scale convolutional layers; (c) a saliency-aware strategy to enhance tumor regions of MRIs. Experiments were conducted on an open dataset for classifying high/low grade gliomas. Performance on the test set using the proposed scheme has shown good results (with an accuracy of 89.47%).
  •  
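The multi-scale feature extraction discussed in the record above can be illustrated with parallel 3D convolutions of different kernel sizes whose outputs are concatenated along the channel axis. This is a generic sketch of the idea, not the network of the paper; channel counts and kernel sizes are assumptions.

    import torch
    import torch.nn as nn

    class MultiScaleConv3d(nn.Module):
        """Parallel 3D convolutions with different receptive fields, concatenated
        channel-wise, so that both fine and coarse tumor features are captured."""
        def __init__(self, in_ch, out_ch):
            super().__init__()
            self.branches = nn.ModuleList(
                nn.Conv3d(in_ch, out_ch, kernel_size=k, padding=k // 2)
                for k in (1, 3, 5)  # three scales (assumed kernel sizes)
            )
        def forward(self, x):
            return torch.cat([branch(x) for branch in self.branches], dim=1)

    # Illustrative usage: one single-channel 32^3 MRI patch
    x = torch.randn(1, 1, 32, 32, 32)
    print(MultiScaleConv3d(1, 8)(x).shape)  # torch.Size([1, 24, 32, 32, 32])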
6.
  • Borrelli, P., et al. (author)
  • Freely available convolutional neural network-based quantification of PET/CT lesions is associated with survival in patients with lung cancer
  • 2022
  • In: EJNMMI Physics. - : Springer Science and Business Media LLC. - 2197-7364. ; 9:1
  • Journal article (peer-reviewed). Abstract:
    • Background: Metabolic positron emission tomography/computed tomography (PET/CT) parameters describing tumour activity contain valuable prognostic information, but performing the measurements manually leads to both intra- and inter-reader variability and is too time-consuming for clinical practice. The use of modern artificial intelligence-based methods offers new possibilities for automated and objective image analysis of PET/CT data. Purpose: We aimed to train a convolutional neural network (CNN) to segment and quantify tumour burden in [18F]-fluorodeoxyglucose (FDG) PET/CT images and to evaluate the association between CNN-based measurements and overall survival (OS) in patients with lung cancer. A secondary aim was to make the method available to other researchers. Methods: A total of 320 consecutive patients referred for FDG PET/CT due to suspected lung cancer were retrospectively selected for this study. Two nuclear medicine specialists manually segmented abnormal FDG uptake in all of the PET/CT studies. One-third of the patients were assigned to a test group. Survival data were collected for this group. The CNN was trained to segment lung tumours and thoracic lymph nodes. Total lesion glycolysis (TLG) was calculated from the CNN-based and manual segmentations. Associations between TLG and OS were investigated using a univariate Cox proportional hazards regression model. Results: The test group comprised 106 patients (median age, 76 years (IQR 61–79); n = 59 female). Both CNN-based TLG (hazard ratio 1.64, 95% confidence interval 1.21–2.21; p = 0.001) and manual TLG (hazard ratio 1.54, 95% confidence interval 1.14–2.07; p = 0.004) estimations were significantly associated with OS. Conclusion: Fully automated CNN-based TLG measurements of PET/CT data were significantly associated with OS in patients with lung cancer. This type of measurement may be of value for the management of future patients with lung cancer. The CNN is publicly available for research purposes. © 2022, The Author(s).
  •  
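Total lesion glycolysis (TLG), the survival-associated measure in the record above, is the metabolic tumour volume multiplied by the mean SUV inside the segmented lesions (often reported in grams, assuming a tissue density of 1 g/ml). A minimal NumPy sketch, assuming an SUV volume and a binary segmentation mask on the same voxel grid; the voxel volume is an assumed example value.

    import numpy as np

    def total_lesion_glycolysis(suv, mask, voxel_volume_ml):
        """TLG = mean SUV within the segmentation * metabolic tumour volume (ml),
        i.e. the sum of SUV over segmented voxels times the voxel volume."""
        lesion_suv = suv[mask > 0]
        if lesion_suv.size == 0:
            return 0.0
        mtv_ml = lesion_suv.size * voxel_volume_ml  # metabolic tumour volume
        return float(lesion_suv.mean() * mtv_ml)

    # Illustrative example with a synthetic SUV map and a small cubic lesion
    suv = np.random.rand(64, 64, 64) * 5.0
    mask = np.zeros_like(suv)
    mask[30:34, 30:34, 30:34] = 1
    print(total_lesion_glycolysis(suv, mask, voxel_volume_ml=0.064))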
7.
  • Ali, Muhaddisa Barat, 1986, et al. (author)
  • Multi-stream Convolutional Autoencoder and 2D Generative Adversarial Network for Glioma Classification
  • 2019
  • In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). - Cham : Springer International Publishing. - 1611-3349 .- 0302-9743. ; 11678 LNCS, pp. 234-245
  • Conference paper (peer-reviewed). Abstract:
    • Diagnosis and timely treatment play an important role in preventing brain tumor growth. Deep learning methods have gained much attention lately. Obtaining a large amount of annotated medical data remains a challenging issue. Furthermore, the high-dimensional features of brain images could lead to over-fitting. In this paper, we address the above issues. Firstly, we propose an architecture for Generative Adversarial Networks to generate good-quality synthetic 2D MRIs from multi-modality MRIs (T1 contrast-enhanced, T2, FLAIR). Secondly, we propose a deep learning scheme based on three streams of Convolutional Autoencoders (CAEs) followed by sensor information fusion. The rationale behind using CAEs is that they may improve glioma classification performance (compared with conventional CNNs), since CAEs offer noise robustness and efficient feature reduction, possibly reducing over-fitting. A two-round training strategy is also applied: pre-training on GAN-augmented synthetic MRIs followed by refined training on original MRIs. Experiments on the BraTS 2017 dataset have demonstrated the effectiveness of the proposed scheme (test accuracy 92.04%). Comparison with several existing schemes has provided further support for the proposed scheme.
  •  
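The two-round training strategy in the record above (pre-training on GAN-augmented synthetic MRIs, then refined training on real MRIs) can be outlined as a plain pre-train/fine-tune loop. The model, loaders, epoch counts and learning rates below are placeholders, not the settings of the paper.

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    def train(model, loader, epochs, lr):
        """One supervised training pass over a data source."""
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            for x, y in loader:
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()

    def two_round_training(model, synthetic_loader, real_loader):
        train(model, synthetic_loader, epochs=5, lr=1e-3)  # round 1: GAN-augmented data
        train(model, real_loader, epochs=5, lr=1e-4)       # round 2: refine on real data
        return model

    # Illustrative usage with random features standing in for encoded MRI slices
    def toy_loader(n):
        return DataLoader(TensorDataset(torch.randn(n, 16), torch.randint(0, 2, (n,))), batch_size=8)

    two_round_training(nn.Linear(16, 2), toy_loader(32), toy_loader(32))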
8.
  • Ge, Chenjie, 1991, et al. (author)
  • Cross-Modality Augmentation of Brain MR Images Using a Novel Pairwise Generative Adversarial Network for Enhanced Glioma Classification
  • 2019
  • In: Proceedings - International Conference on Image Processing, ICIP. - 1522-4880.
  • Conference paper (peer-reviewed). Abstract:
    • © 2019 IEEE. Brain Magnetic Resonance Images (MRIs) are commonly used for tumor diagnosis. Machine learning for brain tumor characterization often uses MRIs from many modalities (e.g., T1-MRI, enhanced T1-MRI, T2-MRI and FLAIR). This paper tackles two issues that may impact brain tumor characterization performance with deep learning: an insufficiently large training dataset and an incomplete collection of MRIs from different modalities. We propose a novel pairwise generative adversarial network (GAN) architecture for generating synthetic brain MRIs in missing modalities by using existing MRIs in other modalities. By improving the training dataset, we aim to mitigate overfitting and improve deep learning performance. The main contributions of the paper include: (a) a pairwise generative adversarial network (GAN) for brain image augmentation via cross-modality image generation; (b) a training strategy to enhance glioma classification performance, where GAN-augmented images are used for pre-training, followed by refined training using real brain MRIs; (c) a demonstration of the proposed method through tests and comparisons of glioma classifiers that are trained on a mix of real and GAN-synthetic data, as well as on real data only. Experiments were conducted on an open TCGA dataset, containing 167 subjects, for classifying IDH genotypes (mutation or wild-type). Test results from two experimental settings have both provided support for the proposed method, with glioma classification performance consistently improved by using mixed real and augmented data (test accuracy 81.03%, a 2.57% improvement).
  •  
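The cross-modality generation in the record above relies on a GAN whose generator maps an existing MR modality to a missing one while a discriminator judges whether a slice is real or synthetic. The sketch below shows only that adversarial setup in miniature (tiny networks, random tensors standing in for T1 and FLAIR slices); it is not the pairwise GAN architecture of the paper.

    import torch
    import torch.nn as nn

    # Tiny generator: maps one 2D modality (e.g. T1) to another (e.g. FLAIR)
    generator = nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 3, padding=1),
    )
    # Tiny discriminator: scores whether a FLAIR slice is real or synthetic
    discriminator = nn.Sequential(
        nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Flatten(), nn.LazyLinear(1),
    )

    bce = nn.BCEWithLogitsLoss()
    t1 = torch.randn(4, 1, 64, 64)          # available modality (assumed data)
    flair_real = torch.randn(4, 1, 64, 64)  # target modality, real slices (assumed data)

    flair_fake = generator(t1)
    # Generator objective: make the discriminator score synthetic slices as real
    g_loss = bce(discriminator(flair_fake), torch.ones(4, 1))
    # Discriminator objective: real slices -> 1, synthetic slices -> 0
    d_loss = bce(discriminator(flair_real), torch.ones(4, 1)) + \
             bce(discriminator(flair_fake.detach()), torch.zeros(4, 1))
    print(g_loss.item(), d_loss.item())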
9.
  • de Dios, Eddie, et al. (author)
  • Introduction to Deep Learning in Clinical Neuroscience
  • 2022
  • In: Acta Neurochirurgica, Supplement. - Cham : Springer International Publishing. - 2197-8395 .- 0065-1419. ; pp. 79-89
  • Book chapter (other academic/artistic). Abstract:
    • The use of deep learning (DL) is rapidly increasing in clinical neuroscience. The term denotes models with multiple sequential layers of learning algorithms, architecturally similar to neural networks of the brain. We provide examples of DL in analyzing MRI data and discuss potential applications and methodological caveats. Important aspects are data pre-processing, volumetric segmentation, and specific task-performing DL methods, such as CNNs and AEs. Additionally, GAN-expansion and domain mapping are useful DL techniques for generating artificial data and combining several smaller datasets. We present results of DL-based segmentation and accuracy in predicting glioma subtypes based on MRI features. Dice scores range from 0.77 to 0.89. In mixed glioma cohorts, IDH mutation can be predicted with a sensitivity of 0.98 and specificity of 0.97. Results in test cohorts have shown improvements of 5–7% in accuracy, following GAN-expansion of data and domain mapping of smaller datasets. The provided DL examples are promising, although not yet in clinical practice. DL has demonstrated usefulness in data augmentation and for overcoming data variability. DL methods should be further studied, developed, and validated for broader clinical use. Ultimately, DL models can serve as effective decision support systems, and are especially well-suited for time-consuming, detail-focused, and data-ample tasks.
  •  
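The Dice scores quoted in the record above (0.77 to 0.89) measure the overlap between a predicted segmentation and the ground truth: Dice = 2|A ∩ B| / (|A| + |B|). A minimal NumPy sketch for binary masks, with made-up example masks:

    import numpy as np

    def dice_coefficient(pred, truth, eps=1e-8):
        """Dice = 2 * |pred AND truth| / (|pred| + |truth|) for binary masks."""
        pred, truth = pred.astype(bool), truth.astype(bool)
        intersection = np.logical_and(pred, truth).sum()
        return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

    # Illustrative example: two partially overlapping square masks
    a = np.zeros((10, 10)); a[2:6, 2:6] = 1
    b = np.zeros((10, 10)); b[3:7, 3:7] = 1
    print(dice_coefficient(a, b))  # 0.5625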
10.
  • Borrelli, P., et al. (author)
  • AI-based detection of lung lesions in F-18 FDG PET-CT from lung cancer patients
  • 2021
  • In: EJNMMI Physics. - : Springer Science and Business Media LLC. - 2197-7364. ; 8:1
  • Journal article (peer-reviewed). Abstract:
    • Background: [F-18]-fluorodeoxyglucose (FDG) positron emission tomography with computed tomography (PET-CT) is a well-established modality in the work-up of patients with a suspected or confirmed diagnosis of lung cancer. Recent research efforts have focused on extracting theragnostic and textural information from manually indicated lung lesions. Both semi-automatic and fully automatic use of artificial intelligence (AI) to localise and classify FDG-avid foci has been demonstrated. To fully harness AI's usefulness, we have developed a method which both automatically detects abnormal lung lesions and calculates the total lesion glycolysis (TLG) on FDG PET-CT. Methods: One hundred twelve patients (59 females and 53 males) who underwent FDG PET-CT due to suspected or for the management of known lung cancer were studied retrospectively. These patients were divided into a training group (59%; n = 66), a validation group (20.5%; n = 23) and a test group (20.5%; n = 23). A nuclear medicine physician manually segmented abnormal lung lesions with increased FDG uptake in all PET-CT studies. The AI-based method was trained to segment the lesions based on the manual segmentations. TLG was then calculated from manual and AI-based measurements, respectively, and analysed with Bland-Altman plots. Results: The AI tool's performance in detecting lesions had a sensitivity of 90%. One small lesion was missed in each of two patients, both of whom had a larger lesion that was correctly detected. The positive and negative predictive values were 88% and 100%, respectively. The correlation between manual and AI TLG measurements was strong (R² = 0.74). Bias was 42 g and the 95% limits of agreement ranged from -736 to 819 g. Agreement was particularly high in smaller lesions. Conclusions: The AI-based method is suitable for the detection of lung lesions and automatic calculation of TLG in small- to medium-sized tumours. In a clinical setting, it will have added value due to its capability to sort out negative examinations, resulting in prioritised and focused care for patients with potentially malignant lesions.
  •  
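The agreement figures in the record above (bias 42 g, 95% limits of agreement from -736 to 819 g) come from a Bland-Altman analysis: the bias is the mean difference between the AI-based and manual TLG values, and the limits of agreement are the bias ± 1.96 standard deviations of the differences. A minimal NumPy sketch with made-up TLG values:

    import numpy as np

    def bland_altman(method_a, method_b):
        """Return the bias (mean difference) and 95% limits of agreement
        between two measurement methods."""
        diff = np.asarray(method_a, float) - np.asarray(method_b, float)
        bias = diff.mean()
        sd = diff.std(ddof=1)
        return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

    # Illustrative example: AI-based vs. manual TLG in grams (made-up values)
    ai_tlg = [120.0, 340.0, 80.0, 560.0]
    manual_tlg = [110.0, 360.0, 70.0, 500.0]
    bias, (low, high) = bland_altman(ai_tlg, manual_tlg)
    print(round(bias, 1), round(low, 1), round(high, 1))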
Type of publication
journal article (95)
conference paper (18)
research review (5)
doctoral thesis (4)
book chapter (3)
licentiate thesis (3)
other publication (1)
patent (1)
Type of content
peer-reviewed (111)
other academic/artistic (19)
Author/editor
Gu, Irene Yu-Hua, 19 ... (9)
Trägårdh, Elin (8)
Jakola, Asgeir Store (8)
Uhlén, Mathias (6)
Jirström, Karin (6)
Ali, Muhaddisa Barat ... (6)
Nyholm, Tufve (4)
Wang, I (4)
Rydén, Lisa (3)
Tolmachev, Vladimir (3)
Andersson-Engels, St ... (3)
Yang, Jie (3)
Lundeberg, Joakim (3)
Hartman, J (3)
Söderström, Karin (3)
af Klinteberg, C (3)
Svanberg, Katarina (3)
Enejder, Annika, 196 ... (3)
Toma-Daşu, Iuliana (3)
Olsson, Håkan (2)
Zackrisson, Sophia (2)
Pontén, Fredrik (2)
Prinz, Christelle N. (2)
Orlova, Anna, 1960- (2)
Mitran, Bogdan (2)
Borin, Jesper (2)
Borga, Magnus (2)
Svanberg, Sune (2)
Eklund, Anders, 1981 ... (2)
Haghdoost, Siamak (2)
Acs, B (2)
Rantalainen, M (2)
Baldetorp, Bo (2)
Zackrisson, Björn (2)
Oredsson, Stina (2)
Andersson-Engels, S. (2)
Svanberg, S. (2)
Svanberg, K. (2)
Nilsson, Annika, 196 ... (2)
Ståhl, Stefan (2)
Smedby, Örjan, Profe ... (2)
Forssell-Aronsson, E ... (2)
Olofsson Bagge, Roge ... (2)
Marko-Varga, György (2)
Malm, Johan (2)
Lång, Kristina (2)
Olsson, Lars E (2)
Albertsson, Per, 196 ... (2)
Berger, Mitchel S (2)
Widhalm, Georg (2)
University
Chalmers tekniska högskola (42)
Lunds universitet (36)
Göteborgs universitet (27)
Kungliga Tekniska Högskolan (24)
Uppsala universitet (23)
Karolinska Institutet (11)
Linköpings universitet (9)
Umeå universitet (8)
Stockholms universitet (4)
Örebro universitet (2)
Blekinge Tekniska Högskola (2)
Luleå tekniska universitet (1)
Högskolan Väst (1)
Malmö universitet (1)
Linnéuniversitetet (1)
RISE (1)
Sveriges Lantbruksuniversitet (1)
Language
English (130)
Research subject (UKÄ/SCB)
Medical and Health Sciences (130)
Natural Sciences (35)
Agricultural Sciences (1)
Social Sciences (1)

Year


 