SwePub
Search the SwePub database

  Advanced search

Boolean operators must be written in UPPER CASE

AND is the default operator and can be omitted

Result list for the search "AMNE:(MEDICAL AND HEALTH SCIENCES Clinical Medicine Radiology, Nuclear Medicine and Medical Imaging) ;lar1:(cth)"

Search: AMNE:(MEDICAL AND HEALTH SCIENCES Clinical Medicine Radiology, Nuclear Medicine and Medical Imaging) > Chalmers tekniska högskola

  • Results 1-10 of 204
1.
  • Ali, Muhaddisa Barat, 1986, et al. (author)
  • A novel federated deep learning scheme for glioma and its subtype classification
  • 2023
  • In: Frontiers in Neuroscience. - 1662-4548 .- 1662-453X. ; 17
  • Journal article (peer-reviewed); abstract:
    • Background: Deep learning (DL) has shown promising results in molecular-based classification of glioma subtypes from MR images. DL requires a large amount of training data to achieve good generalization performance. Since brain tumor datasets are usually small, combining such datasets from different hospitals is needed. Data privacy issues often pose a constraint on such a practice. Federated learning (FL) has gained much attention lately as it trains a central DL model without requiring data sharing between hospitals. Method: We propose a novel 3D FL scheme for glioma and its molecular subtype classification. In the scheme, a slice-based DL classifier, EtFedDyn, is used; it extends FedDyn, with the key differences being a focal loss cost function to tackle severe class imbalance in the datasets and a multi-stream network to exploit MRIs in different modalities. By combining EtFedDyn with domain mapping as pre-processing and 3D scan-based post-processing, the proposed scheme performs 3D brain scan-based classification on datasets from different dataset owners. To examine whether the FL scheme could replace central learning (CL), we compare the classification performance of the proposed FL scheme with that of the corresponding CL scheme. Furthermore, a detailed empirical analysis was conducted to examine the effect of using domain mapping, 3D scan-based post-processing, different cost functions and different FL schemes. Results: Experiments were done on two case studies: classification of glioma subtypes (IDH mutation and wild-type on the TCGA and US datasets in case A) and glioma grades (high/low grade glioma, HGG and LGG, on the MICCAI dataset in case B). The proposed FL scheme obtained good performance on the test sets (85.46%, 75.56%) for IDH subtypes and (89.28%, 90.72%) for glioma LGG/HGG, all averaged over five runs. Compared with the corresponding CL scheme, the drop in test accuracy from the proposed FL scheme is small (−1.17%, −0.83%), indicating its good potential to replace the CL scheme. Furthermore, the empirical tests showed increases in classification test accuracy from applying: domain mapping (0.4%, 1.85%) in case A; the focal loss function (1.66%, 3.25%) in case A and (1.19%, 1.85%) in case B; 3D post-processing (2.11%, 2.23%) in case A and (1.81%, 2.39%) in case B; and EtFedDyn over the FedAvg classifier (1.05%, 1.55%) in case A and (1.23%, 1.81%) in case B with fast convergence, all of which contributed to the improved overall performance of the proposed FL scheme. Conclusion: The proposed FL scheme is shown to be effective in predicting glioma and its subtypes from MR images in the test sets, with great potential to replace conventional CL approaches for training deep networks. This could help hospitals maintain their data privacy while using a federated trained classifier with nearly the same performance as a centrally trained one. Further detailed experiments showed that different parts of the proposed 3D FL scheme, such as domain mapping (making datasets more uniform) and post-processing (scan-based classification), are essential.
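The abstract above attributes part of EtFedDyn's gain to a focal loss cost function for handling severe class imbalance. As a minimal sketch only: this is the standard focal loss written in PyTorch, not the paper's exact implementation, and the alpha and gamma values are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
        # Per-sample cross-entropy, then down-weight easy (high-confidence) samples
        # so that hard, minority-class examples dominate the gradient.
        ce = F.cross_entropy(logits, targets, reduction="none")
        pt = torch.exp(-ce)  # estimated probability of the true class
        return (alpha * (1.0 - pt) ** gamma * ce).mean()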
2.
  • Ge, Chenjie, 1991, et al. (author)
  • Enlarged Training Dataset by Pairwise GANs for Molecular-Based Brain Tumor Classification
  • 2020
  • In: IEEE Access. - 2169-3536 .- 2169-3536. ; 8:1, pp. 22560-22570
  • Journal article (peer-reviewed); abstract:
    • This paper addresses brain tumor subtype classification using Magnetic Resonance Images (MRIs) from different scanner modalities such as T1-weighted, contrast-enhanced T1-weighted, T2-weighted and FLAIR images. Currently most available glioma datasets are relatively moderate in size and often come with incomplete MRIs in different modalities. To tackle the commonly encountered problems of insufficiently large brain tumor datasets and incomplete image modalities for deep learning, we propose adding augmented brain MR images to enlarge the training dataset by employing a pairwise Generative Adversarial Network (GAN) model. The pairwise GAN is able to generate synthetic MRIs across different modalities. To achieve a patient-level diagnostic result, we propose a post-processing strategy that combines the slice-level glioma subtype classification results by majority voting. A two-stage coarse-to-fine training strategy is proposed to learn the glioma features using GAN-augmented MRIs followed by real MRIs. To evaluate the effectiveness of the proposed scheme, experiments were conducted on a brain tumor dataset for classifying glioma molecular subtypes: isocitrate dehydrogenase 1 (IDH1) mutation and IDH1 wild-type. Our results on the dataset show good performance (test accuracy 88.82%). Comparisons with several state-of-the-art methods are also included.
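The patient-level decision described above is obtained by majority voting over slice-level predictions. A minimal sketch under that assumption; the paper's exact tie-breaking rule is not stated in the abstract.

    from collections import Counter

    def patient_level_label(slice_predictions):
        # Majority vote over slice-level subtype predictions for one patient;
        # ties fall back to the label encountered first.
        return Counter(slice_predictions).most_common(1)[0][0]

    # patient_level_label(["IDH1-mutant", "IDH1-wildtype", "IDH1-mutant"]) -> "IDH1-mutant"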
3.
  • Borrelli, P., et al. (author)
  • AI-based detection of lung lesions in F-18 FDG PET-CT from lung cancer patients
  • 2021
  • In: EJNMMI Physics. - : Springer Science and Business Media LLC. - 2197-7364. ; 8:1
  • Journal article (peer-reviewed); abstract:
    • Background: [F-18]-fluorodeoxyglucose (FDG) positron emission tomography with computed tomography (PET-CT) is a well-established modality in the work-up of patients with suspected or confirmed diagnosis of lung cancer. Recent research efforts have focused on extracting theragnostic and textural information from manually indicated lung lesions. Both semi-automatic and fully automatic use of artificial intelligence (AI) to localise and classify FDG-avid foci has been demonstrated. To fully harness AI's usefulness, we have developed a method which both automatically detects abnormal lung lesions and calculates the total lesion glycolysis (TLG) on FDG PET-CT. Methods: One hundred twelve patients (59 females and 53 males) who underwent FDG PET-CT due to suspected or for the management of known lung cancer were studied retrospectively. These patients were divided into a training group (59%; n = 66), a validation group (20.5%; n = 23) and a test group (20.5%; n = 23). A nuclear medicine physician manually segmented abnormal lung lesions with increased FDG uptake in all PET-CT studies. The AI-based method was trained to segment the lesions based on the manual segmentations. TLG was then calculated from the manual and AI-based measurements, respectively, and analysed with Bland-Altman plots. Results: The AI tool's performance in detecting lesions had a sensitivity of 90%. One small lesion was missed in each of two patients, both of whom had a larger lesion that was correctly detected. The positive and negative predictive values were 88% and 100%, respectively. The correlation between manual and AI TLG measurements was strong (R-2 = 0.74). Bias was 42 g and the 95% limits of agreement ranged from -736 to 819 g. Agreement was particularly high in smaller lesions. Conclusions: The AI-based method is suitable for the detection of lung lesions and automatic calculation of TLG in small- to medium-sized tumours. In a clinical setting, it will have an added value due to its capability to sort out negative examinations, resulting in prioritised and focused care of patients with potentially malignant lesions.
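The agreement analysis above reports a bias and 95% limits of agreement from Bland-Altman plots. A minimal sketch of that calculation; the function name and inputs are illustrative, not taken from the paper.

    import numpy as np

    def bland_altman(manual_tlg, ai_tlg):
        # Bias (mean difference) and 95% limits of agreement between two methods.
        diff = np.asarray(ai_tlg, float) - np.asarray(manual_tlg, float)
        bias = diff.mean()
        sd = diff.std(ddof=1)
        return bias, (bias - 1.96 * sd, bias + 1.96 * sd)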
4.
  • Pfeiffer, Christoph, 1989, et al. (author)
  • On-scalp MEG sensor localization using magnetic dipole-like coils: A method for highly accurate co-registration
  • 2020
  • In: NeuroImage. - : Elsevier BV. - 1053-8119 .- 1095-9572. ; 212
  • Journal article (peer-reviewed); abstract:
    • Source modelling in magnetoencephalography (MEG) requires precise co-registration of the sensor array and the anatomical structure of the measured individual's head. In conventional MEG, the positions and orientations of the sensors relative to each other are fixed and known beforehand, requiring only localization of the head relative to the sensor array. Since the sensors in on-scalp MEG are positioned on the scalp, the locations of the individual sensors depend on the subject's head shape and size. The positions and orientations of on-scalp sensors must therefore be measured at every recording. This can be achieved by inverting conventional head localization, localizing the sensors relative to the head rather than the other way around. In this study we present a practical method for localizing sensors using magnetic dipole-like coils attached to the subject's head. We implement and evaluate the method in a set of on-scalp MEG recordings using a 7-channel on-scalp MEG system based on high critical temperature superconducting quantum interference devices (high-T-c SQUIDs). The method allows individually localizing the sensor positions, orientations, and responsivities with high accuracy using only a short averaging time (<= 2 mm, < 3 degrees and < 3%, respectively, with 1-s averaging), enabling continuous sensor localization. Calibrating and jointly localizing the sensor array can further improve the accuracy of position and orientation (< 1 mm and < 1 degree, respectively, with 1-s coil recordings). We demonstrate source localization of on-scalp recorded somatosensory evoked activity based on co-registration with our method. Equivalent current dipole fits of the evoked responses corresponded well (within 4.2 mm) with those based on a commercial, whole-head MEG system.
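Localization with magnetic dipole-like coils, as in the study above, rests on the standard field model of a point magnetic dipole. The expression below is that textbook formula, not the authors' full localization algorithm; the abstract does not specify the exact fitting procedure.

    \mathbf{B}(\mathbf{r}) = \frac{\mu_0}{4\pi}\,
      \frac{3\,(\mathbf{m}\cdot\hat{\mathbf{r}})\,\hat{\mathbf{r}} - \mathbf{m}}{|\mathbf{r}|^{3}}

Here r is the vector from the coil to the sensor, m the coil's dipole moment and mu_0 the vacuum permeability; sensor positions and orientations can then be estimated by fitting this model to the measured coil signals.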
5.
  • Hagberg, Eva, et al. (author)
  • Semi-supervised learning with natural language processing for right ventricle classification in echocardiography—a scalable approach
  • 2022
  • In: Computers in Biology and Medicine. - : Elsevier BV. - 0010-4825 .- 1879-0534. ; 143
  • Journal article (peer-reviewed); abstract:
    • We created a deep learning model, trained on text classified by natural language processing (NLP), to assess right ventricular (RV) size and function from echocardiographic images. We included 12,684 examinations with corresponding written reports for text classification. After manual annotation of 1489 reports, we trained an NLP model to classify the remaining 10,651 reports. A view classifier was developed to select the 4-chamber or RV-focused view from an echocardiographic examination (n = 539). The final models were two image classification models trained on the predicted labels from the combined manual annotation and NLP models and the corresponding echocardiographic view to assess RV function (training set n = 11,008) and size (training set n = 9951). The text classifier identified impaired RV function with 99% sensitivity and 98% specificity and RV enlargement with 98% sensitivity and 98% specificity. The view classification model identified the 4-chamber view with 92% accuracy and the RV-focused view with 73% accuracy. The image classification models identified impaired RV function with 93% sensitivity and 72% specificity and an enlarged RV with 80% sensitivity and 85% specificity; agreement with the written reports was substantial (both κ = 0.65). Our findings show that models for automatic image assessment can be trained to classify RV size and function by using model-annotated data from written echocardiography reports. This pipeline for auto-annotation of echocardiographic images, using an NLP model with medical reports as input, can be used to train an image-assessment model without manual annotation of images and enables fast and inexpensive expansion of the training dataset when needed.
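The sensitivity and specificity figures quoted above come from binary comparisons of model output against the report-derived labels. A minimal, generic sketch of that computation; the label names are illustrative assumptions.

    def sensitivity_specificity(y_true, y_pred, positive="impaired"):
        # Sensitivity = TP / (TP + FN), specificity = TN / (TN + FP).
        tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
        fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
        tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
        fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
        return tp / (tp + fn), tn / (tn + fp)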
6.
  • Grynne, A., et al. (author)
  • Women's experience of the health information process involving a digital information tool before commencing radiation therapy for breast cancer: a deductive interview study
  • 2023
  • In: BMC Health Services Research. - : BioMed Central (BMC). - 1472-6963. ; 23:1
  • Journal article (peer-reviewed); abstract:
    • BACKGROUND: Individuals undergoing radiation therapy for breast cancer frequently request information before, throughout and after the treatment as a means to reduce distress. Nevertheless, the provision of information matched to individuals' needs and level of health literacy is often overlooked. Thus, individuals' information needs are often unmet, leading to reports of discontent. The internet and digital information technology have significantly augmented the available information and changed the way in which people access and comprehend information. As health information is no longer obtained exclusively from healthcare professionals, it is essential to examine the health information process in general, and in relation to health literacy. This paper reports on qualitative interviews targeting women diagnosed with breast cancer who were given access to a health information technology tool, Digi-Do, before commencing radiation therapy, during, and after treatment. METHODS: A qualitative research design, inspired by the integrated health literacy model, was chosen to enable critical reflection by the participating women. Semi-structured interviews were conducted with 15 women who had access to a digital information tool, named Digi-Do, in addition to receiving standard information (oral and written) before commencing radiation therapy, during, and after treatment. A deductive thematic analysis was conducted. RESULTS: The results demonstrate how knowledge, competence, and motivation influence women's experience of the health information process. Three main themes were found: meeting interactive and personal needs by engaging with health information; critical recognition of sources of information; and capability to communicate comprehended health information. The findings reflect the women's experience of the four competencies essential to the health information process: to access, understand, appraise, and apply information. CONCLUSIONS: We conclude that there is a need for tailored digital information tools, such as Digi-Do, to enable iterative access to and use of reliable health information before, during and after the radiation therapy process. Digi-Do can be seen as a valuable complement to interpersonal communication with healthcare professionals, facilitating better understanding and enabling iterative access to and use of reliable health information before, during and after radiotherapy. This enhances a sense of preparedness before treatment starts.
7.
  • Sadik, May, 1970, et al. (author)
  • Artificial Intelligence Increases the Agreement among Physicians Classifying Focal Skeleton/Bone Marrow Uptake in Hodgkin's Lymphoma Patients Staged with F-18 FDG PET/CT-a Retrospective Study
  • 2023
  • In: Nuclear Medicine and Molecular Imaging. - : Springer Science and Business Media LLC. - 1869-3474 .- 1869-3482. ; 57:2, pp. 110-116
  • Journal article (peer-reviewed); abstract:
    • Purpose: Classification of focal skeleton/bone marrow uptake (BMU) can be challenging. The aim is to investigate whether an artificial intelligence-based method (AI), which highlights suspicious focal BMU, increases interobserver agreement among a group of physicians from different hospitals classifying Hodgkin's lymphoma (HL) patients staged with [F-18]FDG PET/CT. Methods: Forty-eight patients staged with [F-18]FDG PET/CT at Sahlgrenska University Hospital between 2017 and 2018 were reviewed twice, 6 months apart, regarding focal BMU. During the second review, the 10 physicians also had access to AI-based advice regarding focal BMU. Results: Each physician's classifications were compared pairwise with the classifications made by all the other physicians, resulting in 45 unique pairs of comparisons both without and with AI advice. The agreement between the physicians increased significantly when AI advice was available, measured as an increase in mean Kappa value from 0.51 (range 0.25-0.80) without AI advice to 0.61 (range 0.19-0.94) with AI advice (p = 0.005). The majority of the physicians agreed with the AI-based method in 40 (83%) of the 48 cases. Conclusion: An AI-based method significantly increases interobserver agreement among physicians working at different hospitals by highlighting suspicious focal BMU in HL patients staged with [F-18]FDG PET/CT.
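The 45 unique pairs above follow from choosing 2 readers out of 10 (10·9/2 = 45), and the agreement measure is Cohen's kappa averaged over those pairs. A minimal sketch of that aggregation, assuming scikit-learn is available; this is not necessarily the authors' software.

    from itertools import combinations
    from sklearn.metrics import cohen_kappa_score

    def mean_pairwise_kappa(ratings):
        # ratings: dict mapping physician id -> list of case classifications,
        # all lists ordered over the same 48 cases.
        pairs = list(combinations(ratings, 2))   # 10 readers -> 45 unique pairs
        kappas = [cohen_kappa_score(ratings[a], ratings[b]) for a, b in pairs]
        return sum(kappas) / len(kappas)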
8.
  • Ge, Chenjie, 1991, et al. (author)
  • Multiscale Deep Convolutional Networks for Characterization and Detection of Alzheimer's Disease using MR Images
  • 2019
  • In: Proceedings - International Conference on Image Processing, ICIP. - 1522-4880. ; 2019-September, pp. 789-793
  • Conference paper (peer-reviewed); abstract:
    • This paper addresses the characterization and detection of Alzheimer's disease (AD) from Magnetic Resonance Images (MRIs). Many existing AD detection methods use single-scale feature learning from brain scans. In this paper, we propose a multiscale deep learning architecture for learning AD features. The main contributions of the paper are: (a) a novel 3D multiscale CNN architecture for the dedicated task of AD detection; (b) a feature fusion and enhancement strategy for multiscale features; (c) an empirical study on the impact of several settings, including two dataset partitioning approaches and the use of multiscale features and feature enhancement. Experiments were conducted on an open ADNI dataset (1198 brain scans from 337 subjects); test results show the effectiveness of the proposed method, with test accuracy of 93.53% and 87.24% (best, average) on the subject-separated dataset, and 99.44% and 98.80% (best, average) on the randomly partitioned brain-scan dataset. Comparison with eight existing methods provides further support for the proposed method.
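The contribution above is a 3D multiscale CNN with feature fusion. A minimal sketch of the general idea in PyTorch; the branch widths, kernel sizes and depth are illustrative assumptions and much smaller than any network the paper would actually use.

    import torch
    import torch.nn as nn

    class MultiScale3DCNN(nn.Module):
        # Two parallel 3D-conv branches with different receptive fields whose
        # pooled feature maps are concatenated (fused) before classification.
        def __init__(self, in_channels=1, num_classes=2):
            super().__init__()
            self.fine = nn.Sequential(
                nn.Conv3d(in_channels, 8, kernel_size=3, padding=1),
                nn.ReLU(), nn.AdaptiveAvgPool3d(1))
            self.coarse = nn.Sequential(
                nn.Conv3d(in_channels, 8, kernel_size=7, padding=3),
                nn.ReLU(), nn.AdaptiveAvgPool3d(1))
            self.classifier = nn.Linear(16, num_classes)

        def forward(self, x):  # x: (batch, channels, depth, height, width)
            f = torch.cat([self.fine(x), self.coarse(x)], dim=1).flatten(1)
            return self.classifier(f)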
9.
  • Borrelli, P., et al. (author)
  • Artificial intelligence-aided CT segmentation for body composition analysis: a validation study
  • 2021
  • In: European Radiology Experimental. - : Springer Science and Business Media LLC. - 2509-9280. ; 5:1
  • Journal article (peer-reviewed); abstract:
    • Background: Body composition is associated with survival outcome in oncological patients, but it is not routinely calculated. Manual segmentation of subcutaneous adipose tissue (SAT) and muscle is time-consuming and therefore limited to a single CT slice. Our goal was to develop an artificial intelligence (AI)-based method for automated quantification of three-dimensional SAT and muscle volumes from CT images. Methods: Ethical approvals from Gothenburg and Lund Universities were obtained. Convolutional neural networks were trained to segment SAT and muscle using manual segmentations on CT images from a training group of 50 patients. The method was applied to a separate test group of 74 cancer patients, who each had two CT studies with a median interval between the studies of 3 days. Manual segmentations in a single CT slice were used for comparison. The accuracy was measured as the overlap between the automated and manual segmentations. Results: The accuracy of the AI method was 0.96 for SAT and 0.94 for muscle. The average differences in volumes were significantly lower than the corresponding differences in areas in a single CT slice: 1.8% versus 5.0% (p < 0.001) for SAT and 1.9% versus 3.9% (p < 0.001) for muscle. The 95% confidence intervals for volumes in an individual subject predicted from the corresponding single CT slice areas were in the order of 20%. Conclusions: The AI-based tool for quantification of SAT and muscle volumes showed high accuracy and reproducibility and provided a body composition analysis that is more relevant than manual analysis of a single CT slice.
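The accuracy above is reported as the overlap between automated and manual segmentations; the Dice coefficient is a common choice for such an overlap score, though the abstract does not name the exact metric. A minimal sketch under that assumption:

    import numpy as np

    def dice_overlap(mask_a, mask_b):
        # Dice coefficient between two binary segmentation masks (1.0 = perfect overlap).
        a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
        intersection = np.logical_and(a, b).sum()
        return 2.0 * intersection / (a.sum() + b.sum())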
10.
  • Ying, T. M., et al. (author)
  • Automated artificial intelligence-based analysis of skeletal muscle volume predicts overall survival after cystectomy for urinary bladder cancer
  • 2021
  • In: European Radiology Experimental. - : Springer Science and Business Media LLC. - 2509-9280. ; 5:1
  • Journal article (peer-reviewed); abstract:
    • Background: Radical cystectomy for urinary bladder cancer is a procedure associated with a high risk of complications and poor overall survival (OS) due to both patient and tumour factors. Sarcopenia is one such patient factor. We have developed a fully automated artificial intelligence (AI)-based image analysis tool for segmenting skeletal muscle of the torso and calculating the muscle volume. Methods: All patients who underwent radical cystectomy for urinary bladder cancer between 2011 and 2019 at Sahlgrenska University Hospital, and who had a pre-operative computed tomography of the abdomen within 90 days of surgery, were included in the study. All patients' CT studies were analysed with the automated AI-based image analysis tool. Clinical data for the patients were retrieved from the Swedish National Register for Urinary Bladder Cancer. Muscle volumes dichotomised by the median for each sex were analysed with Cox regression for OS and logistic regression for 90-day high-grade complications. The study was approved by the Swedish Ethical Review Authority (2020-03985). Results: Out of 445 patients who underwent surgery, 299 (67%) had CT studies available for analysis. The automated AI-based tool failed to segment the muscle volume in seven (2%) patients. Cox regression analysis showed an independent significant association with OS (HR 1.62; 95% CI 1.07-2.44; p = 0.022). Logistic regression did not show any association with high-grade complications. Conclusion: The fully automated AI-based CT image analysis provides a low-cost and meaningful clinical measure that is an independent biomarker for OS following radical cystectomy.
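The survival analysis above dichotomises muscle volume by the sex-specific median and fits a Cox proportional-hazards model for overall survival. A minimal sketch using the lifelines library; the column names and toy data are illustrative assumptions, not the study's data.

    import pandas as pd
    from lifelines import CoxPHFitter

    # One row per patient: follow-up time, event flag (1 = death), and an
    # indicator for muscle volume below the sex-specific median.
    df = pd.DataFrame({
        "time_months":       [12, 34,  7, 56, 23, 41, 18, 29],
        "event":             [ 1,  0,  1,  0,  1,  0,  0,  1],
        "low_muscle_volume": [ 1,  0,  1,  0,  0,  1,  1,  0],
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="time_months", event_col="event")
    cph.print_summary()  # hazard ratio = exp(coef) for low_muscle_volume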
Type of publication
journal article (156)
conference paper (37)
doctoral thesis (4)
research review (4)
report (1)
book chapter (1)
licentiate thesis (1)
Type of content
peer-reviewed (171)
other academic/artistic (33)
Author/editor
Enqvist, Olof, 1981 (47)
Edenbrandt, Lars, 19 ... (29)
Trägårdh, Elin (28)
Ulén, Johannes (26)
Hockings, Paul, 1956 (14)
Dobsicek Trefna, Han ... (11)
Gu, Irene Yu-Hua, 19 ... (10)
Forssell-Aronsson, E ... (9)
Sihver, Lembit, 1962 (9)
Lindegren, Sture, 19 ... (9)
Kahl, Fredrik, 1972 (9)
Edenbrandt, Lars (9)
Borrelli, P. (9)
Kaboteh, R. (9)
Helou, Khalil, 1966 (7)
Jakola, Asgeir Store (7)
Alvén, Jennifer, 198 ... (7)
Kaboteh, Reza (7)
Langen, Britta (7)
Borrelli, Pablo (7)
Poulsen, Mads (7)
Høilund-Carlsen, Pou ... (7)
Rudqvist, Nils (7)
Albertsson, Per, 196 ... (6)
Aneheim, Emma, 1982 (6)
Larsson, Måns, 1989 (6)
Johnsson, Åse (Allan ... (6)
Ge, Chenjie, 1991 (6)
Ulen, J. (6)
Kjölhede, Henrik, 19 ... (5)
Lagerstrand, Kerstin ... (5)
Waterton, John C (5)
Olsson, Lars E (5)
Bäck, Tom, 1964 (5)
Palm, Stig, 1964 (5)
Jensen, Holger (5)
Parris, Toshima Z, 1 ... (5)
Hoilund-Carlsen, Pou ... (5)
Fhager, Andreas, 197 ... (5)
Curto, S. (5)
Olsson, Caroline, 19 ... (4)
Nilsson, Ola, 1957 (4)
Malmberg, Per, 1974 (4)
Isaksson, Mats, 1961 (4)
Lindgren Belal, Sara ... (4)
Persson, Mikael, 195 ... (4)
Spetz, Johan (4)
Edenbrandt, L (4)
Tragardh, E. (4)
de Lazzari, Mattia, ... (4)
Higher education institution
Göteborgs universitet (97)
Lunds universitet (34)
Karolinska Institutet (7)
Umeå universitet (3)
Uppsala universitet (3)
Stockholms universitet (3)
Linköpings universitet (3)
Högskolan Väst (2)
Örebro universitet (2)
Högskolan i Halmstad (1)
Jönköping University (1)
Marie Cederschiöld högskola (1)
Sveriges Lantbruksuniversitet (1)
Language
English (202)
Swedish (2)
Research subject (UKÄ/SCB)
Medical and Health Sciences (204)
Engineering and Technology (93)
Natural Sciences (79)
Social Sciences (1)

Year


 
