SwePub

Hit list for the search "WFRF:(Smedby Örjan) ;hsvcat:1"

Search: WFRF:(Smedby Örjan) > Natural Sciences

  • Results 1-10 of 34
1.
  • Xu, Jiangchang, et al. (author)
  • A review on AI-based medical image computing in head and neck surgery
  • 2022
  • In: Physics in Medicine and Biology. - IOP Publishing. - 0031-9155 .- 1361-6560. ; 67:17, pp. 17TR01-
  • Research review (peer-reviewed), abstract:
    • Head and neck surgery is a fine surgical procedure with a complex anatomical space, difficult operation and high risk. Medical image computing (MIC) that enables accurate and reliable preoperative planning is often needed to reduce the operational difficulty of surgery and to improve patient survival. At present, artificial intelligence, especially deep learning, has become an intense focus of research in MIC. In this study, the application of deep learning-based MIC in head and neck surgery is reviewed. Relevant literature was retrieved on the Web of Science database from January 2015 to May 2022, and some papers were selected for review from mainstream journals and conferences, such as IEEE Transactions on Medical Imaging, Medical Image Analysis, Physics in Medicine and Biology, Medical Physics, MICCAI, etc. Among them, 65 references are on automatic segmentation, 15 references on automatic landmark detection, and eight references on automatic registration. In the elaboration of the review, first, an overview of deep learning in MIC is presented. Then, the application of deep learning methods is systematically summarized according to the clinical needs, and generalized into segmentation, landmark detection and registration of head and neck medical images. In segmentation, it is mainly focused on the automatic segmentation of high-risk organs, head and neck tumors, skull structure and teeth, including the analysis of their advantages, differences and shortcomings. In landmark detection, the focus is mainly on the introduction of landmark detection in cephalometric and craniomaxillofacial images, and the analysis of their advantages and disadvantages. In registration, deep learning networks for multimodal image registration of the head and neck are presented. Finally, their shortcomings and future development directions are systematically discussed. The study aims to serve as a reference and guidance for researchers, engineers or doctors engaged in medical image analysis of head and neck surgery.
  •  
2.
  • Astaraki, Mehdi, PhD Student, 1984-, et al. (author)
  • Early survival prediction in non-small cell lung cancer from PET/CT images using an intra-tumor partitioning method
  • 2019
  • In: Physica medica (Testo stampato). - Elsevier BV. - 1120-1797 .- 1724-191X. ; 60, pp. 58-65
  • Journal article (peer-reviewed), abstract:
    • Purpose: To explore prognostic and predictive values of a novel quantitative feature set describing intra-tumor heterogeneity in patients with lung cancer treated with concurrent and sequential chemoradiotherapy. Methods: Longitudinal PET-CT images of 30 patients with non-small cell lung cancer were analysed. To describe tumor cell heterogeneity, the tumors were partitioned into one to ten concentric regions depending on their sizes, and, for each region, the change in average intensity between the two scans was calculated for PET and CT images separately to form the proposed feature set. To validate the prognostic value of the proposed method, radiomics analysis was performed and a combination of the proposed novel feature set and the classic radiomic features was evaluated. A feature selection algorithm was utilized to identify the optimal features, and a linear support vector machine was trained for the task of overall survival prediction in terms of area under the receiver operating characteristic curve (AUROC). Results: The proposed novel feature set was found to be prognostic and even outperformed the radiomics approach with a significant difference (AUROC_SALoP = 0.90 vs. AUROC_radiomic = 0.71) when feature selection was not employed, whereas with feature selection, a combination of the novel feature set and radiomics led to the highest prognostic values. Conclusion: A novel feature set designed for capturing intra-tumor heterogeneity was introduced. Judging by their prognostic power, the proposed features have a promising potential for early survival prediction.
  •  
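The abstract in entry 2 above describes partitioning each tumor into concentric regions and computing the per-region change in mean intensity between two scans. Below is a minimal sketch of that idea, assuming NumPy/SciPy and a fixed number of shells derived from a Euclidean distance transform; the function name, the shell binning, and the toy data are illustrative assumptions, not the paper's implementation.

```python
# A minimal sketch, assuming NumPy/SciPy, of concentric intra-tumor
# partitioning: bin tumor voxels into shells by distance to the boundary and
# report the change in mean intensity per shell between two longitudinal scans.
import numpy as np
from scipy import ndimage


def concentric_shell_features(mask, scan_before, scan_after, n_shells=4):
    """Return, per concentric shell of the tumor mask, the change in mean
    intensity between two scans (hypothetical helper, illustrative only)."""
    # Distance from each tumor voxel to the tumor boundary.
    dist = ndimage.distance_transform_edt(mask)
    dist_max = dist.max()
    if dist_max == 0:
        raise ValueError("empty tumor mask")

    # Normalize distances to [0, 1] and bin them into shells
    # (shell 0 = outer rim, last shell = tumor core).
    shell_idx = np.minimum((dist / dist_max * n_shells).astype(int), n_shells - 1)

    features = []
    for s in range(n_shells):
        shell = mask & (shell_idx == s)
        features.append(scan_after[shell].mean() - scan_before[shell].mean())
    return np.array(features)


# Toy usage: a synthetic spherical "tumor" with a simulated uptake change.
if __name__ == "__main__":
    z, y, x = np.ogrid[-16:16, -16:16, -16:16]
    mask = (x**2 + y**2 + z**2) < 12**2
    rng = np.random.default_rng(0)
    scan1 = rng.normal(2.0, 0.1, mask.shape)
    scan2 = scan1 + 0.5 * mask
    print(concentric_shell_features(mask, scan1, scan2, n_shells=4))
```

In the paper the number of regions varies from one to ten with tumor size, and PET and CT are processed separately; the sketch uses one channel and a fixed shell count for brevity.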
3.
  • Astaraki, Mehdi, PhD Student, 1984-, et al. (author)
  • A Comparative Study of Radiomics and Deep-Learning Based Methods for Pulmonary Nodule Malignancy Prediction in Low Dose CT Images
  • 2021
  • In: Frontiers in Oncology. - Frontiers Media SA. - 2234-943X. ; 11
  • Journal article (peer-reviewed), abstract:
    • Objectives: Both radiomics and deep learning methods have shown great promise in predicting lesion malignancy in various image-based oncology studies. However, it is still unclear which method to choose for a specific clinical problem given access to the same amount of training data. In this study, we try to compare the performance of a series of carefully selected conventional radiomics methods, end-to-end deep learning models, and deep-feature based radiomics pipelines for pulmonary nodule malignancy prediction on an open database that consists of 1297 manually delineated lung nodules. Methods: Conventional radiomics analysis was conducted by extracting standard handcrafted features from target nodule images. Several end-to-end deep classifier networks, including VGG, ResNet, DenseNet, and EfficientNet, were employed to identify lung nodule malignancy as well. In addition to the baseline implementations, we also investigated the importance of feature selection and class balancing, as well as separating the features learned in the nodule target region and the background/context region. By pooling the radiomics and deep features together in a hybrid feature set, we investigated the compatibility of these two sets with respect to malignancy prediction. Results: The best baseline conventional radiomics model, deep learning model, and deep-feature based radiomics model achieved AUROC values (mean ± standard deviations) of 0.792 ± 0.025, 0.801 ± 0.018, and 0.817 ± 0.032, respectively, through 5-fold cross-validation analyses. However, after trying out several optimization techniques, such as feature selection and data balancing, as well as adding context features, the corresponding best radiomics, end-to-end deep learning, and deep-feature based models achieved AUROC values of 0.921 ± 0.010, 0.824 ± 0.021, and 0.936 ± 0.011, respectively. We achieved the best prediction accuracy from the hybrid feature set (AUROC: 0.938 ± 0.010). Conclusion: The end-to-end deep-learning model outperforms conventional radiomics out of the box without much fine-tuning. On the other hand, fine-tuning the models leads to significant improvements in the prediction performance, where the conventional and deep-feature based radiomics models achieved comparable results. The hybrid radiomics method seems to be the most promising model for lung nodule malignancy prediction in this comparative study.
  •  
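Entry 3 above evaluates conventional radiomics features, deep features, and a pooled hybrid feature set with 5-fold cross-validated AUROC. The sketch below shows, under assumed scikit-learn tooling and synthetic feature matrices, how such a hybrid feature set can be assembled and scored; the classifier choice and the random data are illustrative only, not the paper's pipeline.

```python
# A minimal sketch, assuming scikit-learn and synthetic data, of scoring a
# pooled "hybrid" feature set (radiomics + deep features) with 5-fold
# cross-validated AUROC.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
n_nodules = 200
radiomics_features = rng.normal(size=(n_nodules, 50))   # handcrafted features
deep_features = rng.normal(size=(n_nodules, 128))       # CNN embeddings
labels = rng.integers(0, 2, size=n_nodules)             # benign / malignant

# Pool the two feature sets into a single hybrid feature matrix.
hybrid = np.hstack([radiomics_features, deep_features])

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
auroc = cross_val_score(clf, hybrid, labels, cv=cv, scoring="roc_auc")
print(f"5-fold AUROC: {auroc.mean():.3f} +/- {auroc.std():.3f}")
```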
4.
  • Chang, Yongjun, et al. (author)
  • Effects of preprocessing in slice-level classification of interstitial lung disease based on deep convolutional networks
  • 2018
  • In: VipIMAGE 2017. - Cham : Springer Netherlands. - 9783319681948 ; pp. 624-629
  • Conference paper (peer-reviewed), abstract:
    • Several preprocessing methods are applied to the automatic classification of interstitial lung disease (ILD). The proposed methods are used as inputs to an established convolutional neural network in order to investigate the effect of those preprocessing techniques on slice-level classification accuracy. Experimental results demonstrate that the proposed preprocessing methods combined with a deep learning approach outperformed deep learning applied to the original images without preprocessing.
  •  
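Entry 4 above reports that preprocessing of the input slices affects CNN-based ILD classification, but the abstract does not name the specific preprocessing steps. The snippet below is therefore only a generic, hypothetical example of slice-level CT preprocessing (lung-window clipping and rescaling to [0, 1]); it should not be read as the paper's pipeline.

```python
# A generic, hypothetical example of CT slice preprocessing for a CNN input:
# clip to a lung window and rescale to [0, 1]. The actual preprocessing
# methods evaluated in the paper are not specified in the abstract above.
import numpy as np


def preprocess_ct_slice(hu_slice, window_center=-600.0, window_width=1500.0):
    """Clip a CT slice (Hounsfield units) to a window and scale to [0, 1]."""
    lo = window_center - window_width / 2.0
    hi = window_center + window_width / 2.0
    return (np.clip(hu_slice, lo, hi) - lo) / (hi - lo)


# Example with a random array standing in for a real CT slice.
slice_hu = np.random.default_rng(0).uniform(-1200, 400, size=(512, 512))
network_input = preprocess_ct_slice(slice_hu)
print(network_input.min(), network_input.max())
```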
5.
  •  
6.
  •  
7.
  • Mahbod, A., et al. (author)
  • A Two-Stage U-Net Algorithm for Segmentation of Nuclei in H&E-Stained Tissues
  • 2019
  • In: Digital Pathology. - Cham : Springer Verlag. - 9783030239367 ; pp. 75-82
  • Conference paper (peer-reviewed), abstract:
    • Nuclei segmentation is an important but challenging task in the analysis of hematoxylin and eosin (H&E)-stained tissue sections. While various segmentation methods have been proposed, machine learning-based algorithms and in particular deep learning-based models have been shown to deliver better segmentation performance. In this work, we propose a novel approach to segment touching nuclei in H&E-stained microscopic images using U-Net-based models in two sequential stages. In the first stage, we perform semantic segmentation using a classification U-Net that separates nuclei from the background. In the second stage, the distance map of each nucleus is created using a regression U-Net. The final instance segmentation masks are then created using a watershed algorithm based on the distance maps. Evaluated on a publicly available dataset containing images from various human organs, the proposed algorithm achieves an average aggregate Jaccard index of 56.87%, outperforming several state-of-the-art algorithms applied on the same dataset.
  •  
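Entry 7 above obtains instance masks by running a watershed on a predicted distance map, constrained by the semantic foreground mask from the first U-Net. The sketch below illustrates that post-processing stage with scikit-image, using a synthetic mask and an exact distance transform in place of the network outputs; the networks and their training are omitted, and the helper name is my own.

```python
# A minimal sketch, assuming scikit-image, of the watershed post-processing
# stage: markers are placed at local maxima of the distance map and touching
# nuclei are split within the semantic foreground mask.
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.segmentation import watershed


def instances_from_predictions(foreground_mask, distance_map, min_distance=5):
    """Split touching nuclei into instance labels (hypothetical helper)."""
    # One marker per nucleus, taken from local maxima of the distance map.
    peaks = peak_local_max(distance_map, min_distance=min_distance,
                           labels=foreground_mask.astype(int))
    markers = np.zeros_like(foreground_mask, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    # Watershed floods from the markers on the negated distance map,
    # restricted to the foreground.
    return watershed(-distance_map, markers, mask=foreground_mask)


# Toy usage: two overlapping disks stand in for touching nuclei, and an exact
# distance transform stands in for the regression U-Net output.
yy, xx = np.mgrid[0:64, 0:64]
mask = (((yy - 30) ** 2 + (xx - 25) ** 2) < 12 ** 2) | \
       (((yy - 30) ** 2 + (xx - 42) ** 2) < 12 ** 2)
dist = ndimage.distance_transform_edt(mask)
print("instances found:", instances_from_predictions(mask, dist).max())
```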
8.
  • Mahbod, A., et al. (author)
  • Breast Cancer Histological Image Classification Using Fine-Tuned Deep Network Fusion
  • 2018
  • In: 15th International Conference on Image Analysis and Recognition, ICIAR 2018. - Cham : Springer. - 9783319929996 ; pp. 754-762
  • Conference paper (peer-reviewed), abstract:
    • Breast cancer is the most common cancer type in women worldwide. Histological evaluation of breast biopsies is a challenging task even for experienced pathologists. In this paper, we propose a fully automatic method to classify breast cancer histological images into four classes, namely normal, benign, in situ carcinoma and invasive carcinoma. The proposed method takes normalized hematoxylin and eosin stained images as input and gives the final prediction by fusing the outputs of two residual neural networks (ResNets) of different depths. These ResNets were first pre-trained on ImageNet images, and then fine-tuned on breast histological images. We found that our approach outperformed a previously published method by a large margin when applied to the BioImaging 2015 challenge dataset, yielding an accuracy of 97.22%. Moreover, the same approach provided an excellent classification performance with an accuracy of 88.50% when applied to the ICIAR 2018 grand challenge dataset using 5-fold cross-validation.
  •  
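Entry 8 above fuses two ImageNet-pretrained ResNets of different depths, fine-tuned for four classes, to classify breast histology images. A minimal PyTorch/torchvision sketch of that fusion at inference time is given below; the specific depths (18 and 50), the plain probability averaging, and the omission of the fine-tuning loop are my own simplifications, not the paper's exact setup.

```python
# A minimal sketch, assuming PyTorch/torchvision, of fusing two
# ImageNet-pretrained ResNets of different depths for four-class breast
# histology classification. Fine-tuning is omitted; depths 18 and 50 and
# plain probability averaging are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # normal, benign, in situ carcinoma, invasive carcinoma


def make_resnet(constructor):
    """Load an ImageNet-pretrained ResNet and attach a 4-class head."""
    net = constructor(weights="IMAGENET1K_V1")
    net.fc = nn.Linear(net.fc.in_features, NUM_CLASSES)
    return net


resnet_shallow = make_resnet(models.resnet18)
resnet_deep = make_resnet(models.resnet50)


@torch.no_grad()
def fused_prediction(image_batch):
    """Average the class probabilities of the two networks."""
    resnet_shallow.eval()
    resnet_deep.eval()
    p1 = torch.softmax(resnet_shallow(image_batch), dim=1)
    p2 = torch.softmax(resnet_deep(image_batch), dim=1)
    return (p1 + p2) / 2


# Toy usage with a random batch standing in for normalized H&E patches.
print(fused_prediction(torch.randn(2, 3, 224, 224)).argmax(dim=1))
```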
9.
  • Maria Marreiros, Filipe Miguel, 1978-, et al. (author)
  • GPU-based ray-casting of non-rigid deformations : a comparison between direct and indirect approaches
  • 2014
  • In: Proceedings of SIGRAD 2014, Visual Computing, June 12-13, 2014, Göteborg, Sweden. - Linköping University Electronic Press. - 9789175192123 ; pp. 67-74
  • Conference paper (peer-reviewed), abstract:
    • For ray-casting of non-rigid deformations, the direct approach (as opposed to the traditional indirect approach) does not require the computation of an intermediate volume to be used for the rendering step. The aim of this study was to compare the two approaches in terms of performance (speed) and accuracy (image quality). The direct and the indirect approach were carefully implemented to benefit from the massive parallel power of the GPU, using CUDA. They were then tested with Computed Tomography (CT) datasets of varying sizes and with a synthetic image, the Marschner-Lobb function. The results show that the direct approach is dependent on the ray sampling steps, number of landmarks and image resolution. The indirect approach is mainly affected by the number of landmarks, if the volume is large enough. These results exclude extreme cases, i.e. if the sampling steps are much smaller than the voxel size and if the image resolution is much higher than the ones used here. For a volume of size 512×512×512, using 100 landmarks and an image resolution of 1280×960, the direct method performs better if the ray sampling steps are approximately above 1 voxel. Regarding accuracy, the direct method provides better results for multiple frequencies using the Marschner-Lobb function. The conclusion is that the indirect method is superior in terms of performance if the sampling along the rays is high in comparison to the voxel grid, while the direct method is superior otherwise. The accuracy analysis seems to point out that the direct method is superior, in particular when the implicit function is used.
  •  
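Entry 9 above contrasts a direct approach (deform each ray sample and read from the original volume) with an indirect approach (build a deformed intermediate volume first, then sample it). The CPU sketch below reproduces that distinction in simplified form, assuming NumPy/SciPy and a toy smooth deformation; the paper's GPU (CUDA) implementation and thin plate spline deformation are not reproduced.

```python
# A much-simplified CPU sketch, assuming NumPy/SciPy and a toy smooth
# deformation, of the direct vs. indirect distinction: indirect resamples the
# whole volume into an intermediate volume first, direct deforms each ray
# sample and reads from the original volume.
import numpy as np
from scipy.ndimage import map_coordinates

rng = np.random.default_rng(0)
volume = rng.random((64, 64, 64))


def deform(points):
    """Toy non-rigid deformation: a smooth, position-dependent offset."""
    return points + 0.5 * np.sin(points / 8.0)


# Indirect: build the deformed intermediate volume once, then sample rays from it.
grid = np.indices(volume.shape, dtype=float)           # (3, Z, Y, X)
src = deform(grid.reshape(3, -1).T).T                  # deformed source coordinates
intermediate = map_coordinates(volume, src, order=1).reshape(volume.shape)

ray = np.stack([np.full(32, 10.0),                     # 32 samples along one ray
                np.linspace(0.0, 63.0, 32),
                np.full(32, 20.0)])
indirect_samples = map_coordinates(intermediate, ray, order=1)

# Direct: deform each ray sample and read from the original volume directly.
direct_samples = map_coordinates(volume, deform(ray.T).T, order=1)

# The two paths differ by one extra interpolation step in the indirect case.
print(np.abs(direct_samples - indirect_samples).max())
```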
10.
  • Maria Marreiros, Filipe Miguel (author)
  • Guidance and Visualization for Brain Tumor Surgery
  • 2016
  • Doctoral thesis (other academic/artistic), abstract:
    • Image guidance and visualization play an important role in modern surgery to help surgeons perform their surgical procedures. Here, the focus is on neurosurgery applications, in particular brain tumor surgery, where a craniotomy (opening of the skull) is performed to access directly the brain region to be treated. In this type of surgery, once the skull is opened the brain can change its shape, and this deformation is known as brain shift. Moreover, the boundaries of many types of tumors are difficult to identify by the naked eye from healthy tissue. The main goal of this work was to study and develop image guidance and visualization methods for tumor surgery in order to overcome the problems faced in this type of surgery. Due to brain shift, the magnetic resonance dataset acquired before the operation (preoperatively) no longer corresponds to the anatomy of the patient during the operation (intraoperatively). For this reason, in this work methods were studied and developed to compensate for this deformation. To guide the deformation methods, information on the superficial vessel centerlines of the brain was used. A method for accurate (approximately 1 mm) reconstruction of the vessel centerlines using a multiview camera system was developed. It uses geometrical constraints, relaxation labeling, thin plate spline filtering and finally mean shift to find the correct correspondences between the camera images. A complete non-rigid deformation pipeline was initially proposed and evaluated with an animal model. From these experiments it was observed that although the traditional non-rigid registration methods (in our case coherent point drift) were able to produce satisfactory vessel correspondences between preoperative and intraoperative vessels, in some specific areas the results were suboptimal. For this reason a new method was proposed that combined coherent point drift and thin plate spline semilandmarks. This combination resulted in an accurate (below 1 mm) non-rigid registration method, evaluated with simulated data where artificial deformations were performed. Besides the non-rigid registration methods, a new rigid registration method to obtain the rigid transformation between the magnetic resonance dataset and the neuronavigation coordinate systems was also developed. Once the rigid transformation and the vessel correspondences are known, the thin plate spline can be used to perform the brain shift deformation. To do so, we have used two approaches: a direct and an indirect one. With the direct approach, an image is created that represents the deformed data, and with the indirect approach, a new volume is first constructed and only after that can the deformed image be created. A comparison of these two approaches, implemented for graphics processing units, in terms of performance and image quality, was performed. The indirect method was superior in terms of performance if the sampling along the ray is high in comparison to the voxel grid, while the direct method was superior otherwise. The image quality analysis seemed to indicate that the direct method is superior. Furthermore, visualization studies were performed to understand how different rendering methods and parameters influence the perception of the spatial position of enclosed objects (a typical situation being a tumor enclosed in the brain). To test these methods a new single-monitor-mirror stereoscopic display was constructed. Using this display, stereo images simulating a tumor inside the brain were presented to the users with two rendering methods (illustrative rendering and simple alpha blending) and different levels of opacity. For the simple alpha blending method an optimal opacity level was found, while for the illustrative rendering method all the opacity levels used seemed to perform similarly. In conclusion, this work developed and evaluated 3D reconstruction, registration (rigid and non-rigid) and deformation methods with the purpose of minimizing the brain shift problem. Stereoscopic perception of the spatial position of enclosed objects was also studied using different rendering methods and parameter values.
  •  
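The thesis abstract above uses thin plate splines, driven by vessel landmark correspondences, to propagate the brain-shift deformation to the rest of the volume. The sketch below shows the core interpolation step with SciPy's RBFInterpolator and synthetic landmarks; it is an illustrative assumption of how such a mapping can be set up, not the thesis code.

```python
# A minimal sketch, assuming SciPy and synthetic landmarks, of the thin plate
# spline step: corresponding vessel landmarks before and after brain shift
# define a smooth mapping that can warp arbitrary preoperative points.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Corresponding 3D landmarks (e.g. vessel centerline points) before and after
# the shift; here a noisy translation stands in for a real deformation.
pre_landmarks = rng.uniform(0, 100, size=(30, 3))
intra_landmarks = (pre_landmarks + np.array([2.0, -3.0, 1.0])
                   + rng.normal(0, 0.2, size=(30, 3)))

# Thin plate spline mapping from preoperative to intraoperative space.
tps = RBFInterpolator(pre_landmarks, intra_landmarks, kernel="thin_plate_spline")

# Warp arbitrary preoperative points (e.g. points on the tumor boundary).
query = rng.uniform(0, 100, size=(5, 3))
print(tps(query))
```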
Publication type
conference paper (15)
journal article (13)
doctoral thesis (3)
proceedings (editorship) (1)
research review (1)
licentiate thesis (1)
Content type
peer-reviewed (27)
other academic/artistic (7)
Author/editor
Smedby, Örjan (15)
Wang, Chunliang, 198 ... (14)
Smedby, Örjan, Profe ... (9)
Smedby, Örjan, 1956- (6)
Smedby, Örjan, Profe ... (4)
Wang, Chunliang (3)
Astaraki, Mehdi, PhD ... (3)
Moreno, Rodrigo, 197 ... (3)
Dahlström, Nils (3)
Kechagias, Stergios (2)
Borga, Magnus (2)
Lundberg, Peter (2)
Persson, Anders (2)
Dahlqvist Leinhard, ... (2)
Toma-Daşu, Iuliana (2)
Almer, Sven (2)
Forsgren, Mikael (2)
Chang, Yongjun (2)
Rossitti, Sandro (2)
Yang, Guang (1)
Wang, Q. (1)
Romu, Thobias (1)
Frimmel, Hans (1)
Axelsson, R (1)
Egger, Jan (1)
Nyholm, Tufve, Profe ... (1)
af Buren, S (1)
Holstensson, M (1)
Blomgren, A (1)
Tran, T (1)
Gustafsson, Torbjörn (1)
Brismar, Torkel (1)
Karlsson, Per (1)
Nilsson, T (1)
Maria Marreiros, Fil ... (1)
Norén, Bengt (1)
Bengtsson, Ewert (1)
Lohr, M (1)
Fransson, Sven Göran (1)
Zakko, Yousuf (1)
Buizza, Giulia (1)
Lazzeroni, Marta (1)
Sparrelid, E (1)
Ynnerman, Anders, Pr ... (1)
Klintström, Eva (1)
Lundström, Claes (1)
Gustafsson, Torbjorn (1)
Nyström, Fredrik (1)
Schaefer, G. (1)
Moreno, Rodrigo (1)
University
Linköpings universitet (21)
Kungliga Tekniska Högskolan (19)
Karolinska Institutet (5)
Uppsala universitet (3)
Stockholms universitet (2)
Umeå universitet (1)
Language
English (34)
Research subject (UKÄ/SCB)
Medical and Health Sciences (16)
Engineering and Technology (12)
