SwePub

Hit list for search "WFRF:(Enqvist Olof)"

Search: WFRF:(Enqvist Olof)

  • Results 1-50 of 95
1.
  • Abuhasanein, Suleiman, et al. (author)
  • A novel model of artificial intelligence based automated image analysis of CT urography to identify bladder cancer in patients investigated for macroscopic hematuria
  • 2024
  • In: Scandinavian journal of urology. - : Medical Journal Sweden AB. - 2168-1805 .- 2168-1813. ; 59, p. 90-97
  • Journal article (peer-reviewed) abstract
    • Objective: To evaluate whether artificial intelligence (AI) based automatic image analysis utilising convolutional neural networks (CNNs) can be used to evaluate computed tomography urography (CTU) for the presence of urinary bladder cancer (UBC) in patients with macroscopic hematuria. Methods: Our study included patients who had undergone evaluation for macroscopic hematuria. A CNN-based AI model was trained and validated on the CTUs included in the study on a dedicated research platform (Recomia.org). Sensitivity and specificity were calculated to assess the performance of the AI model. Cystoscopy findings were used as the reference method. Results: The training cohort comprised a total of 530 patients. Following the optimisation process, we developed the final version of our AI model. Subsequently, we applied the model to the validation cohort, which included an additional 400 patients (including 239 patients with UBC). The AI model had a sensitivity of 0.83 (95% confidence interval [CI] 0.76-0.89), specificity of 0.76 (95% CI 0.67-0.84), and a negative predictive value (NPV) of 0.97 (95% CI 0.95-0.98). The majority of tumours in the false negative group (n = 24) were solitary (67%) and smaller than 1 cm (50%), with the majority of patients having cTaG1-2 (71%). Conclusions: We developed and tested an AI model for automatic image analysis of CTUs to detect UBC in patients with macroscopic hematuria. The model showed promising results, with a high detection rate and an excellent NPV. Further developments could lead to a decreased need for invasive investigations and help prioritise patients with serious tumours.
  •  
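For readers unfamiliar with the reported metrics, the sketch below shows how sensitivity, specificity, NPV and PPV follow from a confusion matrix against a reference test such as cystoscopy. The counts are made up for illustration and are not data from the study above.

```python
# Hypothetical counts for an AI-vs-reference comparison (not study data).
tp, fn = 198, 41   # reference-positive cases: detected / missed
tn, fp = 122, 39   # reference-negative cases: correctly cleared / false alarms

sensitivity = tp / (tp + fn)   # fraction of true tumours detected
specificity = tn / (tn + fp)   # fraction of tumour-free cases cleared
npv = tn / (tn + fn)           # probability a negative AI call is correct
ppv = tp / (tp + fp)           # probability a positive AI call is correct

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} "
      f"NPV={npv:.2f} PPV={ppv:.2f}")
```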
2.
  • Abuhasanein, Suleiman, et al. (author)
  • A novel model of artificial intelligence based automated image analysis of CT urography to identify bladder cancer in patients investigated for macroscopic hematuria
  • 2024
  • In: Scandinavian Journal of Urology. - : Medical Journal Sweden AB. - 2168-1805 .- 2168-1813. ; 59, p. 90-97
  • Journal article (peer-reviewed) abstract
    • OBJECTIVE: To evaluate whether artificial intelligence (AI) based automatic image analysis utilising convolutional neural networks (CNNs) can be used to evaluate computed tomography urography (CTU) for the presence of urinary bladder cancer (UBC) in patients with macroscopic hematuria. METHODS: Our study included patients who had undergone evaluation for macroscopic hematuria. A CNN-based AI model was trained and validated on the CTUs included in the study on a dedicated research platform (Recomia.org). Sensitivity and specificity were calculated to assess the performance of the AI model. Cystoscopy findings were used as the reference method. RESULTS: The training cohort comprised a total of 530 patients. Following the optimisation process, we developed the last version of our AI model. Subsequently, we utilised the model in the validation cohort which included an additional 400 patients (including 239 patients with UBC). The AI model had a sensitivity of 0.83 (95% confidence intervals [CI], 0.76-0.89), specificity of 0.76 (95% CI 0.67-0.84), and a negative predictive value (NPV) of 0.97 (95% CI 0.95-0.98). The majority of tumours in the false negative group (n = 24) were solitary (67%) and smaller than 1 cm (50%), with the majority of patients having cTaG1-2 (71%). CONCLUSIONS: We developed and tested an AI model for automatic image analysis of CTUs to detect UBC in patients with macroscopic hematuria. This model showed promising results with a high detection rate and excessive NPV. Further developments could lead to a decreased need for invasive investigations and prioritising patients with serious tumours.
  •  
3.
  • Alvén, Jennifer, 1989, et al. (author)
  • Shape-aware label fusion for multi-atlas frameworks
  • 2019
  • In: Pattern Recognition Letters. - : Elsevier BV. - 0167-8655. ; 124, p. 109-117
  • Journal article (peer-reviewed) abstract
    • Despite having no explicit shape model, multi-atlas approaches to image segmentation have proved to be a top-performer for several diverse datasets and imaging modalities. In this paper, we show how one can directly incorporate shape regularization into the multi-atlas framework. Unlike traditional multi-atlas methods, our proposed approach does not rely on label fusion on the voxel level. Instead, each registered atlas is viewed as an estimate of the position of a shape model. We evaluate and compare our method on two public benchmarks: (i) the VISCERAL Grand Challenge on multi-organ segmentation of whole-body CT images and (ii) the Hammers brain atlas of MR images for segmenting the hippocampus and the amygdala. For this wide spectrum of both easy and hard segmentation tasks, our experimental quantitative results are on par with or better than the state of the art. More importantly, we obtain qualitatively better segmentation boundaries, for instance, preserving topology and fine structures.
  •  
4.
  • Alvén, Jennifer, 1989, et al. (author)
  • Überatlas: Fast and robust registration for multi-atlas segmentation
  • 2016
  • In: Pattern Recognition Letters. - : Elsevier BV. - 0167-8655. ; 80, p. 249-255
  • Journal article (peer-reviewed) abstract
    • Multi-atlas segmentation has become a frequently used tool for medical image segmentation due to its outstanding performance. A computational bottleneck is that all atlas images need to be registered to a new target image. In this paper, we propose an intermediate representation of the whole atlas set – an überatlas – that can be used to speed up the registration process. The representation consists of feature points that are similar and detected consistently throughout the atlas set. A novel feature-based registration method is presented which uses the überatlas to simultaneously and robustly find correspondences and affine transformations to all atlas images. The method is evaluated on 20 CT images of the heart and 30 MR images of the brain with corresponding ground truth. Our approach succeeds in producing better and more robust segmentation results compared to three baseline methods, two intensity-based and one feature-based, and significantly reduces the running times.
  •  
5.
  • Alvén, Jennifer, 1989, et al. (author)
  • Überatlas: Robust Speed-Up of Feature-Based Registration and Multi-Atlas Segmentation
  • 2015
  • In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). - Cham : Springer International Publishing. - 1611-3349 .- 0302-9743. - 9783319196640 ; 9127, p. 92-102
  • Conference paper (peer-reviewed) abstract
    • Registration is a key component in multi-atlas approaches to medical image segmentation. The current state of the art uses intensity-based registration methods, but such methods tend to be slow, which sets practical limitations on the size of the atlas set. In this paper, a novel feature-based registration method for affine registration is presented. The algorithm constructs an abstract representation of the entire atlas set, an überatlas, through clustering of features that are similar and detected consistently through the atlas set. This is done offline. At runtime, only the feature clusters are matched to the target image, simultaneously yielding robust correspondences to all atlases in the atlas set from which the affine transformations can be estimated efficiently. The method is evaluated on 20 CT images of the heart and 30 MR images of the brain with corresponding gold standards. Our approach succeeds in producing better and more robust segmentation results compared to two baseline methods, one intensity-based and one feature-based, and significantly reduces the running times.
  •  
6.
  • Ask, Erik, et al. (author)
  • Optimal Geometric Fitting Under the Truncated L-2-Norm
  • 2013
  • In: 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). - 1063-6919. ; , p. 1722-1729
  • Conference paper (peer-reviewed) abstract
    • This paper is concerned with model fitting in the presence of noise and outliers. Previously, it has been shown that the number of outliers can be minimized with polynomial complexity in the number of measurements. This paper improves on these results in two ways. First, it is shown that for a large class of problems, the statistically more desirable truncated L-2-norm can be optimized with the same complexity. Then, with the same methodology, it is shown how to transform multi-model fitting into a purely combinatorial problem, with worst-case complexity that is polynomial in the number of measurements, though exponential in the number of models. We apply our framework to a series of hard registration and stitching problems, demonstrating that the approach is not only of theoretical interest. It gives a practical method for simultaneously dealing with measurement noise and large amounts of outliers for fitting problems with low-dimensional models.
  •  
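The truncated L2 cost mentioned above caps the penalty of any single residual so that gross outliers cannot dominate the fit. The toy sketch below evaluates that cost for a simple 1D line-fitting problem using a brute-force grid search; it only illustrates the loss, not the polynomial-complexity algorithm of the paper, and all data and parameters are made up.

```python
import numpy as np

def truncated_l2_cost(residuals, tau):
    """Truncated squared loss: quadratic up to tau, constant beyond (outliers)."""
    r2 = residuals ** 2
    return np.minimum(r2, tau ** 2).sum()

# Toy example: fit y = a*x + b to points contaminated by outliers,
# scoring candidate models on a coarse grid (illustration only).
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, 50)
y[:10] += rng.uniform(5, 20, 10)          # gross outliers

best = min(((truncated_l2_cost(y - (a * x + b), tau=0.5), a, b)
            for a in np.linspace(0, 4, 81)
            for b in np.linspace(-5, 5, 101)),
           key=lambda t: t[0])
print("best (cost, a, b):", best)
```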
7.
  • Ask, Erik, et al. (author)
  • Tractable and Reliable Registration of 2D Point Sets
  • 2014
  • In: Lecture Notes in Computer Science (Computer Vision - ECCV 2014, 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part I). - Cham : Springer International Publishing. - 0302-9743 .- 1611-3349. - 9783319105895 - 9783319105901 ; 8689, p. 393-406
  • Conference paper (peer-reviewed) abstract
    • This paper introduces two new methods of registering 2D point sets over rigid transformations when the registration error is based on a robust loss function. In contrast to previous work, our methods are guaranteed to compute the optimal transformation, and at the same time, the worst-case running times are bounded by a low-degree polynomial in the number of correspondences. In practical terms, this means that there is no need to resort to ad-hoc procedures such as random sampling or local descent methods that cannot guarantee the quality of their solutions. We have tested the methods in several different settings, in particular, a thorough evaluation on two benchmarks of microscopic images used for histologic analysis of prostate cancer has been performed. Compared to the state-of-the-art, our results show that the methods are both tractable and reliable despite the presence of a significant amount of outliers.
  •  
8.
  •  
9.
  • Borrelli, P., et al. (author)
  • AI-based detection of lung lesions in F-18 FDG PET-CT from lung cancer patients
  • 2021
  • In: EJNMMI Physics. - : Springer Science and Business Media LLC. - 2197-7364. ; 8:1
  • Journal article (peer-reviewed) abstract
    • Background: [F-18]-fluorodeoxyglucose (FDG) positron emission tomography with computed tomography (PET-CT) is a well-established modality in the work-up of patients with suspected or confirmed diagnosis of lung cancer. Recent research efforts have focused on extracting theragnostic and textural information from manually indicated lung lesions. Both semi-automatic and fully automatic use of artificial intelligence (AI) to localise and classify FDG-avid foci has been demonstrated. To fully harness AI's usefulness, we have developed a method which both automatically detects abnormal lung lesions and calculates the total lesion glycolysis (TLG) on FDG PET-CT. Methods: One hundred twelve patients (59 females and 53 males) who underwent FDG PET-CT due to suspected or for the management of known lung cancer were studied retrospectively. These patients were divided into a training group (59%; n = 66), a validation group (20.5%; n = 23) and a test group (20.5%; n = 23). A nuclear medicine physician manually segmented abnormal lung lesions with increased FDG uptake in all PET-CT studies. The AI-based method was trained to segment the lesions based on the manual segmentations. TLG was then calculated from manual and AI-based measurements, respectively, and analysed with Bland-Altman plots. Results: The AI tool's performance in detecting lesions had a sensitivity of 90%. One small lesion was missed in each of two patients, where both had a larger lesion which was correctly detected. The positive and negative predictive values were 88% and 100%, respectively. The correlation between manual and AI TLG measurements was strong (R2 = 0.74). Bias was 42 g and 95% limits of agreement ranged from -736 to 819 g. Agreement was particularly high in smaller lesions. Conclusions: The AI-based method is suitable for the detection of lung lesions and automatic calculation of TLG in small- to medium-sized tumours. In a clinical setting, it will have added value due to its capability to sort out negative examinations, resulting in prioritised and focused care of patients with potentially malignant lesions.
  •  
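Total lesion glycolysis is simply the mean SUV inside a lesion segmentation multiplied by the lesion volume. A minimal NumPy sketch with a synthetic SUV volume and a hypothetical lesion mask (not the study's pipeline or data):

```python
import numpy as np

def total_lesion_glycolysis(suv, lesion_mask, voxel_volume_ml):
    """TLG = mean SUV inside the lesion mask times the lesion volume (ml)."""
    suv_mean = suv[lesion_mask].mean()
    volume_ml = lesion_mask.sum() * voxel_volume_ml
    return suv_mean * volume_ml

# Synthetic example: 64^3 PET volume with a small 'hot' lesion.
suv = np.random.default_rng(1).normal(0.8, 0.1, (64, 64, 64)).clip(min=0)
mask = np.zeros_like(suv, dtype=bool)
mask[30:34, 30:34, 30:34] = True
suv[mask] += 6.0                                 # make the lesion FDG-avid
print("TLG (g):", total_lesion_glycolysis(suv, mask, voxel_volume_ml=0.064))
```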
10.
  •  
11.
  •  
12.
  • Borrelli, P., et al. (author)
  • Artificial intelligence-aided CT segmentation for body composition analysis: a validation study
  • 2021
  • In: European Radiology Experimental. - : Springer Science and Business Media LLC. - 2509-9280. ; 5:1
  • Journal article (peer-reviewed) abstract
    • Background: Body composition is associated with survival outcome in oncological patients, but it is not routinely calculated. Manual segmentation of subcutaneous adipose tissue (SAT) and muscle is time-consuming and therefore limited to a single CT slice. Our goal was to develop an artificial-intelligence (AI)-based method for automated quantification of three-dimensional SAT and muscle volumes from CT images. Methods: Ethical approvals from Gothenburg and Lund Universities were obtained. Convolutional neural networks were trained to segment SAT and muscle using manual segmentations on CT images from a training group of 50 patients. The method was applied to a separate test group of 74 cancer patients, who had two CT studies each with a median interval between the studies of 3 days. Manual segmentations in a single CT slice were used for comparison. The accuracy was measured as overlap between the automated and manual segmentations. Results: The accuracy of the AI method was 0.96 for SAT and 0.94 for muscle. The average differences in volumes were significantly lower than the corresponding differences in areas in a single CT slice: 1.8% versus 5.0% (p < 0.001) for SAT and 1.9% versus 3.9% (p < 0.001) for muscle. The 95% confidence intervals for predicted volumes in an individual subject from the corresponding single CT slice areas were in the order of 20%. Conclusions: The AI-based tool for quantification of SAT and muscle volumes showed high accuracy and reproducibility and provided a body composition analysis that is more relevant than manual analysis of a single CT slice.
  •  
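The overlap accuracy reported above is typically the Sørensen-Dice index between the automated and manual masks. A small sketch of that computation on two hypothetical masks:

```python
import numpy as np

def dice(a, b):
    """Sørensen-Dice overlap between two boolean masks (1.0 = perfect match)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy comparison of an 'automated' and a 'manual' segmentation of one slice.
manual = np.zeros((128, 128), dtype=bool)
manual[20:90, 30:100] = True
automated = np.zeros_like(manual)
automated[22:92, 28:98] = True
print(f"Dice = {dice(automated, manual):.3f}")
```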
13.
  • Borrelli, Pablo, et al. (author)
  • Artificial intelligence-based detection of lymph node metastases by PET/CT predicts prostate cancer-specific survival
  • 2021
  • In: Clinical Physiology and Functional Imaging. - : Wiley. - 1475-0961 .- 1475-097X. ; 41:1, p. 62-67
  • Journal article (peer-reviewed) abstract
    • Introduction: Lymph node metastases are a key prognostic factor in prostate cancer (PCa), but detecting lymph node lesions from PET/CT images is a subjective process resulting in inter-reader variability. Artificial intelligence (AI)-based methods can provide an objective image analysis. We aimed at developing and validating an AI-based tool for detection of lymph node lesions. Methods: A group of 399 patients with biopsy-proven PCa who had undergone (18)F-choline PET/CT for staging prior to treatment were used to train (n = 319) and test (n = 80) the AI-based tool. The tool consisted of convolutional neural networks using complete PET/CT scans as inputs. In the test set, the AI-based lymph node detections were compared to those of two independent readers. The association with PCa-specific survival was investigated. Results: The AI-based tool detected more lymph node lesions than Reader B (98 vs. 87/117; p = .045) using Reader A as reference. The AI-based tool and Reader A showed similar performance (90 vs. 87/111; p = .63) using Reader B as reference. The number of lymph node lesions detected by the AI-based tool, PSA, and curative treatment was significantly associated with PCa-specific survival. Conclusion: This study shows the feasibility of using an AI-based tool for automated and objective interpretation of PET/CT images that can provide assessments of lymph node lesions comparable with those of experienced readers, as well as prognostic information, in PCa patients.
  •  
14.
  •  
15.
  •  
16.
  • Borrelli, P., et al. (author)
  • Freely available convolutional neural network-based quantification of PET/CT lesions is associated with survival in patients with lung cancer
  • 2022
  • In: EJNMMI Physics. - : Springer Science and Business Media LLC. - 2197-7364. ; 9:1
  • Journal article (peer-reviewed) abstract
    • Background: Metabolic positron emission tomography/computed tomography (PET/CT) parameters describing tumour activity contain valuable prognostic information, but to perform the measurements manually leads to both intra- and inter-reader variability and is too time-consuming in clinical practice. The use of modern artificial intelligence-based methods offers new possibilities for automated and objective image analysis of PET/CT data. Purpose: We aimed to train a convolutional neural network (CNN) to segment and quantify tumour burden in [18F]-fluorodeoxyglucose (FDG) PET/CT images and to evaluate the association between CNN-based measurements and overall survival (OS) in patients with lung cancer. A secondary aim was to make the method available to other researchers. Methods: A total of 320 consecutive patients referred for FDG PET/CT due to suspected lung cancer were retrospectively selected for this study. Two nuclear medicine specialists manually segmented abnormal FDG uptake in all of the PET/CT studies. One-third of the patients were assigned to a test group. Survival data were collected for this group. The CNN was trained to segment lung tumours and thoracic lymph nodes. Total lesion glycolysis (TLG) was calculated from the CNN-based and manual segmentations. Associations between TLG and OS were investigated using a univariate Cox proportional hazards regression model. Results: The test group comprised 106 patients (median age, 76 years (IQR 61–79); n = 59 female). Both CNN-based TLG (hazard ratio 1.64, 95% confidence interval 1.21–2.21; p = 0.001) and manual TLG (hazard ratio 1.54, 95% confidence interval 1.14–2.07; p = 0.004) estimations were significantly associated with OS. Conclusion: Fully automated CNN-based TLG measurements of PET/CT data were significantly associated with OS in patients with lung cancer. This type of measurement may be of value for the management of future patients with lung cancer. The CNN is publicly available for research purposes.
  •  
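Associations such as the hazard ratios above come from a univariate Cox proportional hazards model. A hedged sketch using the lifelines package on purely synthetic data; the column names 'tlg', 'time' and 'event' are hypothetical and nothing here reproduces the study's data or code:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic survival data: one covariate (log TLG), follow-up time, event flag.
rng = np.random.default_rng(2)
n = 106
tlg = rng.lognormal(mean=4.0, sigma=1.0, size=n)
time = rng.exponential(scale=36.0 / (1.0 + 0.002 * tlg))   # higher TLG -> shorter survival
event = (rng.uniform(size=n) < 0.7).astype(int)            # ~70% observed deaths

df = pd.DataFrame({"tlg": np.log(tlg), "time": time, "event": event})
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")        # univariate model: only 'tlg'
print(cph.hazard_ratios_)                                   # HR per unit of log-TLG
```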
17.
  •  
18.
  •  
19.
  • Enqvist, Olof, et al. (author)
  • A Brute-Force Algorithm for Reconstructing a Scene from Two Projections
  • 2011
  • In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2011. - 1063-6919. ; , p. 2961-2968
  • Conference paper (peer-reviewed) abstract
    • Is the real problem in finding the relative orientation of two viewpoints the correspondence problem? We argue that this is only one difficulty. Even with known correspondences, popular methods like the eight point algorithm and minimal solvers may break down due to planar scenes or small relative motions. In this paper, we derive a simple, brute-force algorithm which is both robust to outliers and has no such algorithmic degeneracies. Several cost functions are explored including maximizing the consensus set and robust norms like truncated least-squares. Our method is based on parameter search in a four-dimensional space using a new epipolar parametrization. In principle, we do an exhaustive search of parameter space, but the computations are very simple and easily parallelizable, resulting in an efficient method. Further speedups can be obtained by restricting the domain of possible motions to, for example, planar motions or small rotations. Experimental results are given for a variety of scenarios including scenes with a large portion of outliers. Further, we apply our algorithm to 3D motion segmentation where we outperform state-of-the-art on the well-known Hopkins-155 benchmark database.
  •  
20.
  • Enqvist, Olof, et al. (author)
  • Global Optimization for One-Dimensional Structure and Motion Problems
  • 2010
  • In: SIAM Journal on Imaging Sciences. - : Society for Industrial & Applied Mathematics (SIAM). - 1936-4954. ; 3:4, p. 1075-1095
  • Journal article (peer-reviewed) abstract
    • We study geometric reconstruction problems in one-dimensional retina vision. In such problems, the scene is modeled as a two-dimensional plane, and the camera sensor produces one-dimensional images of the scene. Our main contribution is an efficient method for computing the global optimum to the structure and motion problem with respect to the L-infinity norm of the reprojection errors. One-dimensional cameras have proven useful in several applications, most prominently for autonomous vehicles, where they are used to provide inexpensive and reliable navigational systems. Previous results on one-dimensional vision are limited to the classification and solving of minimal cases, bundle adjustment for finding local optima, and linear algorithms for algebraic cost functions. In contrast, we present an approach for finding globally optimal solutions with respect to the L-infinity norm of the angular reprojection errors. We show how to solve intersection and resection problems as well as the problem of simultaneous localization and mapping (SLAM). The algorithm is robust to use when there are missing data, which means that all points are not necessarily seen in all images. Our approach has been tested on a variety of different scenarios, both real and synthetic. The algorithm shows good performance for intersection and resection and for SLAM with up to five views. For more views the high dimension of the search space tends to give long running times. The experimental section also gives interesting examples showing that for one-dimensional cameras with limited field of view the SLAM problem is often inherently ill-conditioned.
  •  
21.
  • Enqvist, Olof, et al. (author)
  • Non-Sequential Structure from Motion
  • 2011
  • Conference paper (peer-reviewed) abstract
    • Prior work on multi-view structure from motion is dominated by sequential approaches starting from a single two-view reconstruction, then adding new images one by one. In contrast, we propose a non-sequential methodology based on rotational consistency and robust estimation using convex optimization. The resulting system is more robust with respect to (i) unreliable two-view estimations caused by short baselines, (ii) repetitive scenes with locally consistent structures that are not consistent with the global geometry and (iii) loop closing as errors are not propagated in a sequential manner. Both theoretical justifications and experimental comparisons are given to support these claims.
  •  
22.
  • Enqvist, Olof, et al. (author)
  • Optimal Correspondences from Pairwise Constraints
  • 2009
  • In: IEEE International Conference on Computer Vision. - 1550-5499. - 9781424444199 ; , p. 1295-1302
  • Conference paper (peer-reviewed) abstract
    • Correspondence problems are of great importance in computer vision. They appear as subtasks in many applications such as object recognition, merging partial 3D reconstructions and image alignment. Automatically matching features from appearance only is difficult and errors are frequent. Thus, it is necessary to use geometric consistency to remove incorrect correspondences. Typically, heuristic methods like RANSAC or EM-like algorithms are used, but they risk getting trapped in local optima and are in no way guaranteed to find the best solution. This paper illustrates how pairwise constraints in combination with graph methods can be used to efficiently find optimal correspondences. These ideas are implemented on two basic geometric problems, 3D-3D registration and 2D-3D registration. The developed scheme can handle large rates of outliers and cope with multiple hypotheses. Despite the combinatorial explosion, the resulting algorithm, which has been extensively evaluated on real data, yields competitive running times compared to the state of the art.
  •  
23.
  • Enqvist, Olof (author)
  • Robust Algorithms for Multiple View Geometry: Outliers and Optimality
  • 2011
  • Doctoral thesis (other academic/artistic) abstract
    • This thesis is concerned with the geometrical parts of computer vision, or more precisely, with the three-dimensional geometry. The overall aim is to extract geometric information from a set of images. Most methods for estimating the geometry of multiple views rely on the existence of robust solvers for a set of basic problems. Such a basic problem can be estimating the relative orientation of two cameras or estimating the position of a camera given a model of the scene. The first part of this thesis presents a number of new algorithms for attacking different instances of such basic problems. Normally, methods for these problems consist of two parts. First, interest points are extracted from the images and point-to-point correspondences between images are determined. In the second step these correspondences are used to estimate the geometry. A major difficulty lies in the existence of incorrect correspondences, often called outliers. Not modelling these outliers will result in very poor accuracy. Hence, the algorithms in this thesis are designed to be robust to such outliers. In particular, focus lies on obtaining optimal solutions in the presence of outliers. For example, it is shown how optimal solutions can be found in cases when the residuals are quasiconvex functions. Moreover, optimal algorithms for the non-convex problems of calibrated camera pose estimation, registration and relative orientation estimation are presented. The second part of the thesis discusses how the solutions from these basic problems can be combined and refined to form a model of the whole scene. Again, robustness is crucial. In this case, robustness with respect to incorrect solutions from the basic problems. In particular, a complete system for estimating multiple view geometry is proposed. Often this problem has been solved in a sequential manner, but recently there has been a large interest in non-sequential methods. The results in this thesis show the advantages of a non-sequential approach based on graph methods as well as convex optimization. The last chapter of the thesis is concerned with structure from motion for the special case of one-dimensional cameras. It is shown how optimal solutions can be obtained using linear programming.
  •  
24.
  • Enqvist, Olof, et al. (author)
  • Robust Fitting for Multiple View Geometry
  • 2012
  • In: Lecture Notes in Computer Science (Computer Vision - ECCV 2012, Proceedings of the 12th European Conference on Computer Vision, Florence, Italy, October 7-13, 2012, Part I). - Berlin, Heidelberg : Springer Berlin Heidelberg. - 1611-3349 .- 0302-9743. - 9783642337178 - 9783642337185 ; 7572, p. 738-751
  • Conference paper (peer-reviewed) abstract
    • How hard are geometric vision problems with outliers? We show that for most fitting problems, a solution that minimizes the number of outliers can be found with an algorithm that has polynomial time-complexity in the number of points (independent of the rate of outliers). Further, and perhaps more interestingly, other cost functions such as the truncated L2-norm can also be handled within the same framework with the same time complexity. We apply our framework to triangulation, relative pose problems and stitching, and give several other examples that fulfill the required conditions. Based on efficient polynomial equation solvers, it is experimentally demonstrated that these problems can be solved reliably, in particular for low-dimensional models. Comparisons to standard random sampling solvers are also given.
  •  
25.
  • Enqvist, Olof, et al. (author)
  • Robust Optimal Pose Estimation
  • 2008
  • In: Computer Vision – ECCV 2008 (Lecture Notes in Computer Science). - Berlin, Heidelberg : Springer Berlin Heidelberg. - 1611-3349 .- 0302-9743. - 9783540886815 ; 5302, p. 141-153
  • Conference paper (peer-reviewed) abstract
    • We study the problem of estimating the position and orientation of a calibrated camera from an image of a known scene. A common problem in camera pose estimation is the existence of false correspondences between image features and modeled 3D points. Existing techniques for handling outliers, such as RANSAC, have no guarantee of optimality. In contrast, we work with a natural extension of the L-infinity norm to the outlier case. Using a simple result from classical geometry, we derive necessary conditions for L-infinity optimality and show how to use them in a branch-and-bound setting to find the optimum and to detect outliers. The algorithm has been evaluated on synthetic as well as real data, showing good empirical performance. In addition, for cases with no outliers, we demonstrate shorter execution times than existing optimal algorithms.
  •  
26.
  • Enqvist, Olof, et al. (author)
  • Tractable Algorithms for Robust Model Estimation
  • 2015
  • In: International Journal of Computer Vision. - : Springer Science and Business Media LLC. - 1573-1405 .- 0920-5691. ; 112:1, p. 115-129
  • Journal article (peer-reviewed) abstract
    • What is the computational complexity of geometric model estimation in the presence of noise and outliers? We show that the number of outliers can be minimized in polynomial time with respect to the number of measurements, although exponential in the model dimension. Moreover, for a large class of problems, we prove that the statistically more desirable truncated L2-norm can be optimized with the same complexity. In a similar vein, it is also shown how to transform a multi-model estimation problem into a purely combinatorial one—with worst-case complexity that is polynomial in the number of measurements but exponential in the number of models. We apply our framework to a series of hard fitting problems. It gives a practical method for simultaneously dealing with measurement noise and large amounts of outliers in the estimation of low-dimensional models. Experimental results and a comparison to random sampling techniques are presented for the applications rigid registration, triangulation and stitching.
  •  
27.
  • Fejne, Frida, 1986, et al. (author)
  • Multiatlas Segmentation Using Robust Feature-Based Registration
  • 2017
  • In: Cloud-Based Benchmarking of Medical Image Analysis. - Cham : Springer International Publishing. - 9783319496429 ; , p. 203-218
  • Book chapter (other academic/artistic) abstract
    • This paper presents a pipeline which uses a multiatlas approach for multiorgan segmentation in whole-body CT images. In order to obtain accurate registrations between the target and the atlas images, we develop an adapted feature-based method which uses organ-specific features. These features are learnt during an offline preprocessing step, and thus, the algorithm still benefits from the speed of feature-based registration methods. These feature sets are then used to obtain pairwise non-rigid transformations using RANSAC followed by a thin-plate spline refinement or NiftyReg. The fusion of the transferred atlas labels is performed using a random forest classifier, and finally, the segmentation is obtained using graph cuts with a Potts model as interaction term. Our pipeline was evaluated on 20 organs in 10 whole-body CT images at the VISCERAL Anatomy Challenge, in conjunction with the International Symposium on Biomedical Imaging, Brooklyn, New York, in April 2015. It performed best on the majority of the organs with respect to the Dice index.
  •  
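The registration step described above estimates a transformation from sparse feature correspondences with RANSAC. A generic NumPy sketch of that idea for an affine transform on synthetic 3D point correspondences; the sample size, threshold and data are illustrative assumptions, not the pipeline's actual settings:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 3D affine transform (A, t) mapping src points onto dst."""
    X = np.hstack([src, np.ones((len(src), 1))])        # (n, 4)
    M, *_ = np.linalg.lstsq(X, dst, rcond=None)         # (4, 3)
    return M[:3].T, M[3]

def ransac_affine(src, dst, iters=500, thresh=5.0, seed=0):
    """Estimate an affine transform from correspondences contaminated by outliers."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), size=4, replace=False)
        A, t = fit_affine(src[idx], dst[idx])
        err = np.linalg.norm(src @ A.T + t - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_affine(src[best_inliers], dst[best_inliers]), best_inliers

# Synthetic correspondences: a known affine map plus noise and 30% outliers.
rng = np.random.default_rng(1)
src = rng.uniform(0, 100, (200, 3))
A_true = np.eye(3) + 0.05 * rng.normal(size=(3, 3))
dst = src @ A_true.T + np.array([5.0, -3.0, 2.0]) + rng.normal(0, 0.5, src.shape)
dst[:60] = rng.uniform(0, 100, (60, 3))                 # gross outliers
(A_est, t_est), inliers = ransac_affine(src, dst)
print("inliers:", inliers.sum(), "/", len(src))
```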
28.
  • Fredriksson, Johan, et al. (author)
  • Efficient algorithms for robust estimation of relative translation
  • 2016
  • In: Image and Vision Computing. - : Elsevier BV. - 0262-8856. ; 52, p. 114-124
  • Journal article (peer-reviewed) abstract
    • One of the key challenges for structure from motion systems in order to make them robust to failure is the ability to handle outliers among the correspondences. In this paper we present two new algorithms that find the optimal solution in the presence of outliers when the camera undergoes a pure translation. The first algorithm has polynomial-time computational complexity, independently of the amount of outliers. The second algorithm does not offer such a theoretical complexity guarantee, but we demonstrate that it is magnitudes faster in practice. No random sampling approaches such as RANSAC are guaranteed to find an optimal solution, while our two methods do. We evaluate and compare the algorithms both on synthetic and real experiments. We also embed the algorithms in a larger system, where we optimize for the rotation angle as well (the rotation axis is measured by other means). The experiments show that for problems with a large amount of outliers, the RANSAC estimates may deteriorate compared to our optimal methods.
  •  
29.
  • Fredriksson, Johan, et al. (author)
  • Fast and Reliable Two-View Translation Estimation
  • 2014
  • In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. - 1063-6919. - 9781479951178 ; , p. 1606-1612
  • Conference paper (peer-reviewed) abstract
    • It has long been recognized that one of the fundamental difficulties in the estimation of two-view epipolar geometry is the capability of handling outliers. In this paper, we develop a fast and tractable algorithm that maximizes the number of inliers under the assumption of a purely translating camera. Compared to classical random sampling methods, our approach is guaranteed to compute the optimal solution of a cost function based on reprojection errors and it has better time complexity. The performance is in fact independent of the inlier/outlier ratio of the data. This opens up for a more reliable approach to robust ego-motion estimation. Our basic translation estimator can be embedded into a system that computes the full camera rotation. We demonstrate the applicability in several difficult settings with large amounts of outliers. It turns out to be particularly well-suited for small rotations and rotations around a known axis (which is the case for cellular phones where the gravitation axis can be measured). Experimental results show that compared to standard RANSAC methods based on minimal solvers, our algorithm produces more accurate estimates in the presence of large outlier ratios.
  •  
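For a purely translating calibrated camera, the essential matrix reduces to E = [t]_x, so a candidate translation direction can be scored directly by counting correspondences with small epipolar residuals. The sketch below does this by exhaustive search over a direction grid on synthetic data; it is a simplification for illustration only, not the optimal estimators discussed in the entries above.

```python
import numpy as np

def count_inliers(t, x1, x2, thresh=1e-2):
    """Inlier count for translation t: for pure translation E = [t]_x, so a
    correspondence (x1, x2) in normalized homogeneous coordinates is an inlier
    when the point-to-epipolar-line distance |x2 . (t x x1)| / |line| is small."""
    line = np.cross(np.broadcast_to(t, x1.shape), x1)
    resid = np.abs(np.einsum("ij,ij->i", x2, line))
    resid /= np.linalg.norm(line[:, :2], axis=1) + 1e-12
    return int((resid < thresh).sum())

def brute_force_translation(x1, x2, n=200):
    """Exhaustively score translation directions on a hemisphere grid."""
    best = (-1, None)
    for az in np.linspace(0, 2 * np.pi, n, endpoint=False):
        for el in np.linspace(0, np.pi / 2, n // 4):
            t = np.array([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el)])
            best = max(best, (count_inliers(t, x1, x2), tuple(t)))
    return best

# Synthetic normalized correspondences under a known forward translation.
rng = np.random.default_rng(0)
X = np.hstack([rng.uniform(-1, 1, (300, 2)), rng.uniform(4, 8, (300, 1))])  # 3D points
t_true = np.array([0.2, 0.1, 1.0]) / np.linalg.norm([0.2, 0.1, 1.0])
x1 = X / X[:, 2:]                                   # first view (camera at origin)
X2 = X - t_true                                     # second view: camera moved by t
x2 = X2 / X2[:, 2:]
x2[:60, :2] += rng.uniform(-0.3, 0.3, (60, 2))      # corrupt 20% with outliers
n_in, t_est = brute_force_translation(x1, x2)
print("inliers:", n_in, "estimated direction:", np.round(t_est, 2))
```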
30.
  • Gålne, Anni, et al. (author)
  • AI-based quantification of whole-body tumour burden on somatostatin receptor PET/CT
  • 2023
  • In: European Journal of Hybrid Imaging. - 2510-3636. ; 7:1
  • Journal article (peer-reviewed) abstract
    • Background: Segmenting the whole-body somatostatin receptor-expressing tumour volume (SRETVwb) on positron emission tomography/computed tomography (PET/CT) images is highly time-consuming but has shown value as an independent prognostic factor for survival. An automatic method to measure SRETVwb could improve disease status assessment and provide a tool for prognostication. This study aimed to develop an artificial intelligence (AI)-based method to detect and quantify SRETVwb and total lesion somatostatin receptor expression (TLSREwb) from [68Ga]Ga-DOTA-TOC/TATE PET/CT images. Methods: A UNet3D convolutional neural network (CNN) was used to train an AI model with [68Ga]Ga-DOTA-TOC/TATE PET/CT images, where all tumours were manually segmented with a semi-automatic method. The training set consisted of 148 patients, of which 108 had PET-positive tumours. The test group consisted of 30 patients, of which 25 had PET-positive tumours. Two physicians segmented tumours in the test group for comparison with the AI model. Results: There were good correlations between the segmented SRETVwb and TLSREwb by the AI model and the physicians, with Spearman rank correlation coefficients of r = 0.78 and r = 0.73, respectively, for SRETVwb and r = 0.83 and r = 0.81, respectively, for TLSREwb. The sensitivity on a lesion detection level was 80% and 79%, and the positive predictive value was 83% and 84% when comparing the AI model with the two physicians. Conclusion: It was possible to develop an AI model to segment SRETVwb and TLSREwb with high performance. A fully automated method makes quantification of tumour burden achievable and has the potential to be more widely used when assessing PET/CT images.
  •  
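The agreement figures above are Spearman rank correlations between the AI and physician measurements. A minimal sketch with hypothetical paired tumour volumes (not study data):

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical paired whole-body tumour volumes (ml): AI model vs. a physician.
ai_srtv = np.array([0.0, 1.2, 3.5, 8.1, 15.0, 22.4, 40.2, 55.0, 80.3, 120.7])
md_srtv = np.array([0.0, 1.0, 4.1, 7.2, 14.1, 25.0, 38.5, 60.1, 75.9, 131.2])

rho, p = spearmanr(ai_srtv, md_srtv)   # rank correlation, robust to scale differences
print(f"Spearman r = {rho:.2f} (p = {p:.3g})")
```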
31.
  • Jiang, Fangyuan, et al. (author)
  • A Combinatorial Approach to L1-Matrix Factorization
  • 2015
  • In: Journal of Mathematical Imaging and Vision. - : Springer Science and Business Media LLC. - 1573-7683 .- 0924-9907. ; 51:3, p. 430-441
  • Journal article (peer-reviewed) abstract
    • Recent work on low-rank matrix factorization has focused on the missing data problem and robustness to outliers and therefore the problem has often been studied under the $L_1$-norm. However, due to the non-convexity of the problem, most algorithms are sensitive to initialization and tend to get stuck in a local optimum. In this paper, we present a new theoretical framework aimed at achieving optimal solutions to the factorization problem. We define a set of stationary points to the problem that will normally contain the optimal solution. It may be too time-consuming to check all these points, but we demonstrate on several practical applications that even by just computing a random subset of these stationary points, one can achieve significantly better results than current state of the art. In fact, in our experimental results we empirically observe that our competitors rarely find the optimal solution and that our approach is less sensitive to the existence of multiple local minima.
  •  
32.
  • Jiang, Fangyuan, et al. (author)
  • Improved Object Detection and Pose Using Part-Based Models
  • 2013
  • In: Lecture Notes in Computer Science (Image Analysis : 18th Scandinavian Conference, SCIA 2013, Espoo, Finland, June 17-20, 2013. Proceedings). - Berlin, Heidelberg : Springer Berlin Heidelberg. - 1611-3349 .- 0302-9743. - 9783642388859 - 9783642388866 ; 7944, p. 396-407
  • Conference paper (peer-reviewed) abstract
    • Automated object detection is perhaps the most central task of computer vision and arguably the most difficult one. This paper extends previous work on part-based models by using accurate geometric models both in the learning phase and at detection. In the learning phase, manual annotations are used to reduce perspective distortion before learning the part-based models. Since training is performed on rectified images, the learnt models are more specific, reducing the risk of false positives. At the same time, a set of representative object poses is learnt. These are used at detection to remove perspective distortion. The method is evaluated on the bus category of the Pascal dataset with promising results.
  •  
33.
  • Kaboteh, Reza, et al. (author)
  • Convolutional neural network based quantification of choline uptake in PET/CT studies is associated with overall survival in patients with prostate cancer
  • 2017
  • In: European Journal of Nuclear Medicine and Molecular Imaging. - 1619-7070 .- 1619-7089. ; 44:supplement 2
  • Journal article (peer-reviewed) abstract
    • Aim: To develop a convolutional neural network (CNN) based automated method for quantification of 18F-choline uptake in the prostate gland in PET/CT studies and to study the association between this measure, clinical data and overall survival in patients with prostate cancer. Methods: A CNN was trained to segment the prostate gland in CT images using manual segmentations performed by a radiologist in a group of 100 patients, who had undergone 18F-FDG PET/CT. After the training process, the CNN automatically segmented the prostate gland in the CT images and SUV values in the corresponding PET images were automatically analyzed in a separate validation group consisting of 45 patients with biopsy-proven hormone-naïve prostate cancer. All patients had undergone an 18F-choline PET/CT as part of a previous research project. Voxels localized in the prostate gland and having a SUV >2.65 were defined as abnormal, as proposed by Reske S et al. (2006). Automated calculation of the following five PET measurements was performed: maximal SUV within the prostate gland - SUVmax; average SUV for voxels with SUV >2.65 - SUVmean; volume of voxels with SUV >2.65 - VOL; fraction of VOL related to the whole volume of the prostate gland - FRAC; product SUVmean x FRAC defined as Total Lesion Uptake - TLU. The association between the automated PET measurements, age, PSA, Gleason score and overall survival (OS) was evaluated using a univariate Cox proportional hazards regression model. Kaplan-Meier analysis was used to estimate the survival difference (log-rank test). Results: TLU and FRAC were significantly associated with OS in the Cox analysis, while the other three PET measurements, age, PSA and Gleason score were not. Kaplan-Meier analysis showed that patients with SUVmax <5.3, SUVmean <3.5 and TLU <1 showed significantly longer survival times than patients with values higher than these thresholds. No significant differences were found when patients were stratified based on the other two PET measurements, PSA or Gleason score. Conclusion: Measurements reflecting 18F-choline PET uptake in the prostate gland obtained using a completely automated method were significantly associated with OS in patients with hormone-naïve prostate cancer. This type of objective quantification of PET/CT studies could be of value also for other PET tracers and other cancers in the future.
  •  
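The five PET measurements defined in the abstract above are straightforward to compute once the gland is segmented. A sketch with a synthetic SUV volume and a hypothetical prostate mask, using the stated SUV threshold of 2.65; everything else is illustrative:

```python
import numpy as np

def prostate_pet_measures(suv, prostate_mask, voxel_ml, threshold=2.65):
    """SUVmax, SUVmean, VOL, FRAC and TLU as defined in the abstract above,
    computed from an SUV volume and a prostate segmentation mask."""
    gland = suv[prostate_mask]
    hot = gland > threshold
    suv_max = gland.max()
    suv_mean = gland[hot].mean() if hot.any() else 0.0
    vol = hot.sum() * voxel_ml                    # volume of voxels above threshold
    frac = hot.sum() / prostate_mask.sum()        # fraction of the gland above threshold
    tlu = suv_mean * frac                         # Total Lesion Uptake
    return dict(SUVmax=suv_max, SUVmean=suv_mean, VOL=vol, FRAC=frac, TLU=tlu)

# Synthetic example volume with an avid region inside the gland mask.
rng = np.random.default_rng(3)
suv = rng.normal(1.5, 0.3, (48, 48, 48)).clip(min=0)
mask = np.zeros_like(suv, dtype=bool)
mask[18:30, 18:30, 18:30] = True
suv[20:24, 20:24, 20:24] += 4.0
print(prostate_pet_measures(suv, mask, voxel_ml=0.064))
```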
34.
  • Kahl, Fredrik, 1972, et al. (author)
  • Good Features for Reliable Registration in Multi-Atlas Segmentation
  • 2015
  • In: CEUR Workshop Proceedings. - 1613-0073. ; 1390:January, p. 12-17
  • Conference paper (peer-reviewed) abstract
    • This work presents a method for multi-organ segmentation in whole-body CT images based on a multi-atlas approach. A robust and efficient feature-based registration technique is developed which uses sparse organ specific features that are learnt based on their ability to register different organ types accurately. The best fitted feature points are used in RANSAC to estimate an affine transformation, followed by a thin plate spline refinement. This yields an accurate and reliable nonrigid transformation for each organ, which is independent of initialization and hence does not suffer from the local minima problem. Further, this is accomplished at a fraction of the time required by intensity-based methods. The technique is embedded into a standard multi-atlas framework using label transfer and fusion, followed by a random forest classifier which produces the data term for the final graph cut segmentation. For a majority of the classes our approach outperforms the competitors at the VISCERAL Anatomy Grand Challenge on segmentation at ISBI 2015.
  •  
35.
  • Källén, Hanna, et al. (author)
  • Tracking and Reconstruction of Vehicles for Accurate Position Estimation
  • 2011
  • In: Applications in Computer Vision (WACV), 2011 Workshop on. - 9781424494965 - 1424494966 ; , p. 110-117
  • Conference paper (peer-reviewed) abstract
    • To improve traffic safety, it is important to evaluate the safety of roads and intersections. Today, this requires a large amount of manual labor, so an automated system using cameras would be very beneficial. We focus on the geometric part of the problem, that is, how to get accurate three-dimensional data from images of a road or an intersection. This is essential in order to correctly identify different events and incidents, for example to estimate when two cars get dangerously close to each other. The proposed method uses a standard tracker to find corresponding points between frames. Then a RANSAC-type algorithm detects points that are likely to belong to the same vehicle. To fully exploit the fact that vehicles rotate and translate only in the ground plane, the structure from motion is estimated using an optimization approach based on the L-infinity-norm. The same approach also allows for easy setup of the system by estimating the camera orientation relative to the ground plane. Promising results for real-world data are presented.
  •  
36.
  • Lind, Erica, et al. (author)
  • Automated quantification of reference levels in liver and mediastinum (blood pool) for the Deauville therapy response classification using FDG-PET/CT in lymphoma patients
  • 2017
  • In: European Journal of Nuclear Medicine and Molecular Imaging. - 1619-7070 .- 1619-7089. ; 44:supplement 2
  • Journal article (peer-reviewed) abstract
    • Aim: To develop and validate a convolutional neural network (CNN) based method for automated quantification of reference levels in liver and mediastinum (blood pool) for the Deauville therapy response classification using FDG-PET/CT in lymphoma patients. Methods: CNNs were trained to segment the liver and the mediastinum, defined as the thoracic part of the aorta, in CT images from 81 consecutive lymphoma patients, who had undergone FDG-PET/CT examinations. Trained image readers segmented the liver and aorta manually in each of the CT images and these segmentations, together with the CT images, were used to train the CNN. After the training process, the CNN method was applied to a separate validation group consisting of six consecutive lymphoma patients (17-82 years, 3 female). First, the liver and mediastinum were automatically segmented in the CT images. Second, voxels in the corresponding FDG-PET images, which were localized in the liver and mediastinum, were selected and the median standard uptake value (SUV) was calculated. The CNN-based analysis was compared to corresponding manual segmentations by two experienced radiologists. The Dice index was used to analyse the overlap between the segmentations by the CNN and the two radiologists. A Dice index of 1.00 indicates perfect matching. Results: The mean Dice indices for the comparison between CNN-based liver segmentations and those of the two radiologists in the validation group were 0.95 and 0.95. A corresponding comparison between the two radiologists also resulted in a Dice index of 0.95. The mean liver volumes were 1,752 ml, 1,757 ml and 1,768 ml for the CNN and the two radiologists, respectively. The median SUV for the liver was on average 1.8 and the differences between median SUV based on CNN and manual segmentations were less than or equal to 0.1. The mean Dice indices for the mediastinum were 0.80, 0.83 (CNN vs radiologists) and 0.86 (comparing the two radiologists). The mean mediastinum (aorta) volumes were 147 ml, 140 ml and 125 ml for the CNN and the two radiologists, respectively. The median SUV for the mediastinum was on average 1.4 and the differences between median SUV based on CNN and manual segmentations were less than or equal to 0.14. Conclusion: A CNN-based method for automated quantification of reference levels in liver and mediastinum shows good agreement with results obtained by experienced radiologists, who manually segmented the CT images. This is a first and promising step towards a completely objective treatment response evaluation in patients with lymphoma based on FDG-PET/CT.
  •  
37.
  • Lindgren Belal, Sarah, et al. (author)
  • 3D skeletal uptake of F-18 sodium fluoride in PET/CT images is associated with overall survival in patients with prostate cancer
  • 2017
  • In: EJNMMI Research. - : Springer Science and Business Media LLC. - 2191-219X. ; 7:1
  • Journal article (peer-reviewed) abstract
    • Background: Sodium fluoride (NaF) positron emission tomography combined with computer tomography (PET/CT) has been shown to be more sensitive than the whole-body bone scan in the detection of skeletal uptake due to metastases in prostate cancer. We aimed to calculate a 3D index for NaF PET/CT and investigate its correlation to the bone scan index (BSI) and overall survival (OS) in a group of patients with prostate cancer. Methods: NaF PET/CT and bone scans were studied in 48 patients with prostate cancer. Automated segmentation of the thoracic and lumbar spines, sacrum, pelvis, ribs, scapulae, clavicles, and sternum were made in the CT images. Hotspots in the PET images were selected using both a manual and an automated method. The volume of each hotspot localized in the skeleton in the corresponding CT image was calculated. Two PET/CT indices, based on manual (manual PET index) and automatic segmenting using a threshold of SUV 15 (automated PET15 index), were calculated by dividing the sum of all hotspot volumes with the volume of all segmented bones. BSI values were obtained using a software for automated calculations. Results: BSI, manual PET index, and automated PET15 index were all significantly associated with OS and concordance indices were 0.68, 0.69, and 0.70, respectively. The median BSI was 0.39 and patients with a BSI > 0.39 had a significantly shorter median survival time than patients with a BSI < 0.39. The median manual PET index was 0.53 and patients with a manual PET index > 0.53 had a significantly shorter median survival time than patients with a manual PET index < 0.53. The median automated PET15 index was 0.11 and patients with an automated PET15 index > 0.11 had a significantly shorter median survival time than patients with an automated PET15 index < 0.11.
  •  
38.
  • Lindgren Belal, Sarah, et al. (author)
  • Applications of Artificial Intelligence in PSMA PET/CT for Prostate Cancer Imaging
  • 2024
  • In: Seminars in Nuclear Medicine. - 1558-4623 .- 0001-2998. ; 54:1, p. 141-149
  • Research review (peer-reviewed) abstract
    • Prostate-specific membrane antigen (PSMA) positron emission tomography/computed tomography (PET/CT) has emerged as an important imaging technique for prostate cancer. The use of PSMA PET/CT is rapidly increasing, while the number of nuclear medicine physicians and radiologists to interpret these scans is limited. Additionally, there is variability in interpretation among readers. Artificial intelligence techniques, including traditional machine learning and deep learning algorithms, are being used to address these challenges and provide additional insights from the images. The aim of this scoping review was to summarize the available research on the development and applications of AI in PSMA PET/CT for prostate cancer imaging. A systematic literature search was performed in PubMed, Embase and Cinahl according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. A total of 26 publications were included in the synthesis. The included studies focus on different aspects of artificial intelligence in PSMA PET/CT, including detection of primary tumor, local recurrence and metastatic lesions, lesion classification, tumor quantification and prediction/prognostication. Several studies show similar performances of artificial intelligence algorithms compared to human interpretation. Few artificial intelligence tools are approved for use in clinical practice. Major limitations include the lack of external validation and prospective design. Demonstrating the clinical impact and utility of artificial intelligence tools is crucial for their adoption in healthcare settings. To take the next step towards a clinically valuable artificial intelligence tool that provides quantitative data, independent validation studies are needed across institutions and equipment to ensure robustness.
  •  
39.
  • Lindgren Belal, Sarah, et al. (author)
  • Association of PET index quantifying skeletal uptake in NaF PET/CT images with overall survival in prostate cancer patients
  • 2017
  • In: Journal of Clinical Oncology. - 0732-183X. ; 35:6 Suppl, p. 178-178
  • Conference paper (peer-reviewed) abstract
    • Background: Bone Scan Index (BSI) derived from 2D whole-body bone scans is considered an imaging biomarker of bone metastases burden carrying prognostic information. Sodium fluoride (NaF) PET/CT is more sensitive than bone scan in detecting bone changes due to metastases. We aimed to develop a semi-quantitative PET index similar to the BSI for NaF PET/CT imaging and to study its relationship to BSI and overall survival in patients with prostate cancer. Methods: NaF PET/CT and bone scans were analyzed in 48 patients (aged 53-92 years) with prostate cancer. Thoracic and lumbar spines, sacrum, pelvis, ribs, scapulae, clavicles, and sternum were automatically segmented from the CT images, representing approximately 1/3 of the total skeletal volume. Hotspots in the PET images, within the segmented parts in the CT images, were visually classified and hotspots interpreted as metastases were included in the analysis. The PET index was defined as the quotient obtained as the hotspot volume from the PET images divided by the segmented bone tissue volume from the CT images. BSI was automatically calculated using EXINIboneBSI. Results: The correlation between the PET index and BSI was r2 = 0.54. The median BSI was 0.39 (IQR 0.08-2.05). The patients with a BSI ≥ 0.39 had a significantly shorter median survival time than patients with a BSI < 0.39 (2.3 years vs. not reached after 5 years). BSI was significantly associated with overall survival (HR 1.13, 95% CI 1.13 to 1.41; p < 0.001), and the C-index was 0.68. The median PET index was 0.53 (IQR 0.02-2.62). The patients with a PET index ≥ 0.53 had a significantly shorter median survival time than patients with a PET index < 0.53 (2.5 years vs. not reached after 5 years). The PET index was significantly associated with overall survival (HR 1.18, 95% CI 1.01 to 1.30; p < 0.001) and the C-index was 0.68. Conclusions: The PET index based on NaF PET/CT images was correlated to BSI and significantly associated with overall survival in patients with prostate cancer. Further studies are needed to evaluate the clinical value of this novel 3D PET index as a possible future imaging biomarker.
  •  
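The PET index above is the ratio of total hotspot volume to segmented skeletal volume, and survival is then compared after splitting the cohort at the median index. A toy sketch with made-up volumes (not study data; whether the index is expressed as a fraction or a percentage is an assumption here):

```python
import numpy as np

def pet_index(hotspot_volumes_ml, segmented_bone_volume_ml):
    """PET index = total hotspot volume / segmented skeletal volume (here in %)."""
    return 100.0 * np.sum(hotspot_volumes_ml) / segmented_bone_volume_ml

# Hypothetical cohort: compute the index per patient and split at the median.
rng = np.random.default_rng(4)
indices = np.array([pet_index(rng.exponential(5.0, size=rng.integers(0, 8)), 2500.0)
                    for _ in range(48)])
median = np.median(indices)
high_burden = indices >= median     # group compared against the low-burden group
print(f"median PET index = {median:.2f}, high-burden patients = {high_burden.sum()}")
```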
40.
  • Lindgren Belal, Sarah, et al. (author)
  • Deep learning-based evaluation of normal bone marrow activity in 18F-NaF PET/CT in patients with prostate cancer
  • 2020
  • In: Insights into Imaging. - : Springer Science and Business Media LLC. - 1869-4101. ; 11:Suppl. 1, p. 349-350
  • Conference paper (peer-reviewed) abstract
    • Purpose: Bone marrow is the primary site of skeletal metastases in prostate cancer. 18F-sodium fluoride (NaF) can be used to detect malignant activity, but also identifies irrelevant degenerative cortical uptake. Normal radiotracer activity in solely the marrow has yet to be described and could be a first step towards automated tumor burden calculation using SUV thresholds. We aimed to investigate normal activity of 18F-NaF in whole bone and bone marrow in patients with localized prostate cancer. Methods and materials: 18F-NaF PET/CT scans from 87 patients with high-risk prostate cancer from two centers were retrospectively analyzed. All patients had a recent negative or inconclusive bone scan. In the first center, the PET scan was acquired 1-1.5 hours after i.v. injection of 4 MBq/kg 18F-NaF on an integrated PET/CT system (Gemini TF, Philips Medical Systems) (53/87). In the second center, scanning was performed 1 hour after i.v. injection of 3 MBq/kg 18F-NaF on an integrated PET/CT system (Discovery ST, GE Healthcare) (34/87). CT scans were obtained in immediate connection to the PET scan. Automated segmentations of vertebrae, pelvis, femora, humeri and sternum were performed in the CT scans using a deep learning-based method. Bone <7 mm from skeletal surfaces was removed to isolate the marrow. SUV was measured within the remaining area in the PET scan. Results: SUVmax and SUVmean in the whole bone and bone marrow of the different regions were presented. Conclusion: We present a deep-learning approach for evaluation of normal radiotracer activity in whole bone and bone marrow. Knowledge about radiotracer uptake in the normal bone prior to cancerous involvement is a necessary first step for subsequent tumor assessment and could be of value in the implementation of future tracers.
  •  
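The marrow isolation step in the entry above (discarding bone within 7 mm of the skeletal surface) can be sketched with a Euclidean distance transform. The sketch below assumes a binary bone mask and an SUV volume already resampled to the same grid; array names and spacing are illustrative, and this is not necessarily the exact implementation used in the study.

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def isolate_marrow(bone_mask, spacing_mm, margin_mm=7.0):
        """Keep only bone voxels lying more than margin_mm from the bone surface."""
        # Distance (in mm) from every bone voxel to the nearest non-bone voxel.
        dist_to_surface = distance_transform_edt(bone_mask, sampling=spacing_mm)
        return dist_to_surface > margin_mm

    def suv_stats(suv, mask):
        """SUVmean and SUVmax within a mask."""
        values = suv[mask]
        return float(values.mean()), float(values.max())

    # Toy example: a solid 'bone' block and a synthetic SUV volume.
    bone = np.zeros((40, 40, 40), dtype=bool)
    bone[5:35, 5:35, 5:35] = True
    marrow = isolate_marrow(bone, spacing_mm=(1.0, 1.0, 1.0))
    suv = np.random.default_rng(0).gamma(2.0, 1.0, bone.shape)
    print(suv_stats(suv, bone), suv_stats(suv, marrow))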
41.
  • Lindgren Belal, Sarah, et al. (författare)
  • Deep learning for segmentation of 49 selected bones in CT scans: First step in automated PET/CT-based 3D quantification of skeletal metastases
  • 2019
  • Ingår i: European Journal of Radiology. - : Elsevier BV. - 0720-048X .- 1872-7727. ; 113, s. 89-95
  • Tidskriftsartikel (refereegranskat)abstract
    • Purpose: The aim of this study was to develop a deep learning-based method for segmentation of bones in CT scans and test its accuracy compared to manual delineation, as a first step in the creation of an automated PET/CT-based method for quantifying skeletal tumour burden. Methods: Convolutional neural networks (CNNs) were trained to segment 49 bones using manual segmentations from 100 CT scans. After training, the CNN-based segmentation method was tested on 46 patients with prostate cancer, who had undergone 18F-choline PET/CT and 18F-NaF PET/CT less than three weeks apart. Bone volumes were calculated from the segmentations. The network's performance was compared with manual segmentations of five bones made by an experienced physician. Accuracy of the spatial overlap between automated CNN-based and manual segmentations of these five bones was assessed using the Sørensen-Dice index (SDI). Reproducibility was evaluated applying the Bland-Altman method. Results: The median (SD) volumes of the five selected bones by CNN and manual segmentation were: Th7 41 (3.8) and 36 (5.1), L3 76 (13) and 75 (9.2), sacrum 284 (40) and 283 (26), 7th rib 33 (3.9) and 31 (4.8), and sternum 80 (11) and 72 (9.2), respectively. Median SDIs were 0.86 (Th7), 0.85 (L3), 0.88 (sacrum), 0.84 (7th rib) and 0.83 (sternum). The intraobserver volume difference was smaller with the CNN-based than with the manual approach: Th7 2% and 14%, L3 7% and 8%, sacrum 1% and 3%, 7th rib 1% and 6%, sternum 3% and 5%, respectively. The average volume difference, measured as the ratio of volume difference to mean volume between the two CNN-based segmentations, was 5–6% for the vertebral column and ribs and ≤3% for other bones. Conclusion: The new deep learning-based method for automated segmentation of bones in CT scans provided highly accurate bone volumes in a fast and automated way and thus appears to be a valuable first step in the development of a clinically useful processing procedure providing reliable skeletal segmentation as a key part of quantification of skeletal metastases. (An illustrative sketch of the Dice index calculation follows this entry.)
  •  
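The Sørensen-Dice index used above to compare automated and manual segmentations is a standard overlap measure, 2|A ∩ B| / (|A| + |B|). A minimal sketch (mask names and the toy shapes are illustrative):

    import numpy as np

    def dice_index(auto_mask, manual_mask):
        """Sørensen-Dice index between two binary segmentations."""
        a = np.asarray(auto_mask, dtype=bool)
        b = np.asarray(manual_mask, dtype=bool)
        denom = a.sum() + b.sum()
        if denom == 0:
            return 1.0  # both masks empty: define the overlap as perfect
        return 2.0 * np.logical_and(a, b).sum() / denom

    # Toy example: two slightly shifted cubes overlap partially.
    a = np.zeros((50, 50, 50), dtype=bool); a[10:30, 10:30, 10:30] = True
    b = np.zeros_like(a);                   b[12:32, 10:30, 10:30] = True
    print(f"Dice: {dice_index(a, b):.2f}")  # 0.90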
42.
  • Minarik, David, et al. (författare)
  • Denoising of Scintillation Camera Images Using a Deep Convolutional Neural Network: A Monte Carlo Simulation Approach
  • 2020
  • Ingår i: Journal of Nuclear Medicine. - : Society of Nuclear Medicine. - 0161-5505 .- 2159-662X. ; 61:2, s. 298-303
  • Tidskriftsartikel (refereegranskat)abstract
    • Scintillation camera images contain a large amount of Poisson noise. We have investigated whether noise can be removed in whole-body bone scans using convolutional neural networks (CNNs) trained with sets of noisy and noiseless images obtained by Monte Carlo simulation. Methods: Three CNNs were generated using 3 different sets of training images: simulated bone scan images, images of a cylindric phantom with hot and cold spots, and a mix of the first two. Each training set consisted of 40,000 noiseless and noisy image pairs. The CNNs were evaluated with simulated images of a cylindric phantom and simulated bone scan images. The mean squared error between filtered and true images was used as the difference metric, and the coefficient of variation was used to estimate noise reduction. The CNNs were compared with Gaussian and median filters. A clinical evaluation was performed in which the ability to detect metastases for CNN- and Gaussian-filtered bone scans with half the number of counts was compared with standard bone scans. Results: The best CNN reduced the coefficient of variation by, on average, 92%, and the best standard filter reduced the coefficient of variation by 88%. The best CNN gave a mean squared error that was, on average, 68% and 20% better than the best standard filters for the cylindric and bone scan images, respectively. The best CNNs for the cylindric phantom and bone scans were the dedicated CNNs. No significant differences in the ability to detect metastases were found between standard, CNN-, and Gaussian-filtered bone scans. Conclusion: Noise can be removed efficiently regardless of noise level with little or no resolution loss. The CNN filter enables reducing the scanning time by half while still obtaining good accuracy for bone metastasis assessment. (An illustrative sketch of the evaluation metrics follows this entry.)
  •  
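The evaluation above rests on two simple image metrics: the mean squared error against the noiseless ground truth and the coefficient of variation in a uniform region as a noise measure. A minimal sketch of both, using a plain smoothing filter as a stand-in for the CNN and Gaussian filters (the filter choice and Poisson noise level are illustrative):

    import numpy as np
    from scipy.ndimage import uniform_filter

    def mean_squared_error(filtered, truth):
        return float(np.mean((np.asarray(filtered, float) - np.asarray(truth, float)) ** 2))

    def coefficient_of_variation(region):
        region = np.asarray(region, float)
        return float(region.std() / region.mean())

    # Toy example: Poisson noise on a flat region of 100 counts per pixel.
    rng = np.random.default_rng(1)
    truth = np.full((128, 128), 100.0)
    noisy = rng.poisson(truth).astype(float)
    filtered = uniform_filter(noisy, size=5)  # stand-in denoising filter

    cv_reduction = 1.0 - coefficient_of_variation(filtered) / coefficient_of_variation(noisy)
    print(f"MSE: {mean_squared_error(filtered, truth):.1f}, "
          f"CV reduction: {100 * cv_reduction:.0f}%")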
43.
  • Molnar, David, et al. (författare)
  • Artificial intelligence based automatic quantification of epicardial adipose tissue suitable for large scale population studies
  • 2021
  • Ingår i: Scientific Reports. - : Springer Science and Business Media LLC. - 2045-2322. ; 11:1
  • Tidskriftsartikel (refereegranskat)abstract
    • To develop a fully automatic model capable of reliably quantifying epicardial adipose tissue (EAT) volumes and attenuation in large scale population studies to investigate their relation to markers of cardiometabolic risk. Non-contrast cardiac CT images from the SCAPIS study were used to train and test a convolutional neural network based model to quantify EAT by: segmenting the pericardium, suppressing noise-induced artifacts in the heart chambers, and, if image sets were incomplete, imputing missing EAT volumes. The model achieved a mean Dice coefficient of 0.90 when tested against expert manual segmentations on 25 image sets. Tested on 1400 image sets, the model successfully segmented 99.4% of the cases. Automatic imputation of missing EAT volumes had an error of less than 3.1% with up to 20% of the slices in image sets missing. The most important predictors of EAT volumes were weight and waist, while EAT attenuation was predicted mainly by EAT volume. A model with excellent performance, capable of fully automatic handling of the most common challenges in large scale EAT quantification, has been developed. In studies of the importance of EAT in disease development, the strong co-variation with anthropometric measures needs to be carefully considered. (An illustrative sketch of slice-wise volume imputation follows this entry.)
  •  
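One of the tasks handled by the model above is imputing EAT volume when slices are missing from an image set. The study's model learns this; purely as an illustration of the idea, the sketch below imputes missing per-slice EAT areas by linear interpolation and integrates them to a volume. The method, names and numbers are assumptions, not the paper's approach.

    import numpy as np

    def impute_eat_volume_ml(slice_areas_cm2, slice_thickness_cm):
        """Fill missing per-slice EAT areas (None) by linear interpolation,
        then integrate area x thickness to a volume in ml (= cm^3)."""
        areas = np.array([np.nan if a is None else a for a in slice_areas_cm2], float)
        idx = np.arange(areas.size)
        known = ~np.isnan(areas)
        areas[~known] = np.interp(idx[~known], idx[known], areas[known])
        return float(areas.sum() * slice_thickness_cm)

    # Toy example: 2 of 10 slices missing, 3 mm slice thickness.
    areas = [4.1, 4.5, None, 5.2, 5.0, None, 4.8, 4.2, 3.9, 3.5]
    print(f"Imputed EAT volume: {impute_eat_volume_ml(areas, 0.3):.1f} ml")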
44.
  • Mortensen, Mike A., et al. (författare)
  • Artificial intelligence-based versus manual assessment of prostate cancer in the prostate gland: a method comparison study
  • 2019
  • Ingår i: Clinical Physiology and Functional Imaging. - : Wiley. - 1475-0961 .- 1475-097X. ; 39:6, s. 399-406
  • Tidskriftsartikel (refereegranskat)abstract
    • Aim: To test the feasibility of a fully automated artificial intelligence-based method providing PET measures of prostate cancer (PCa). Methods: A convolutional neural network (CNN) was trained for automated measurements in 18F-choline (FCH) PET/CT scans obtained prior to radical prostatectomy (RP) in 45 patients with newly diagnosed PCa. Automated values were obtained for prostate volume, maximal standardized uptake value (SUVmax), mean standardized uptake value of voxels considered abnormal (SUVmean) and volume of abnormal voxels (Volabn). The product SUVmean × Volabn was calculated to reflect total lesion uptake (TLU). Corresponding manual measurements were performed. CNN-estimated data were compared with the weighed surgically removed tissue specimens and manually derived data and related to clinical parameters, assuming that 1 g ≈ 1 ml of tissue. Results: The mean (range) weight of the prostate specimens was 44 g (20–109), while the CNN-estimated volume was 62 ml (31–108), with a mean difference of 13.5 g or ml (95% CI: 9.78–17.32). The two measures were significantly correlated (r = 0.77, P < 0.001). Mean differences (95% CI) between CNN-based and manually derived PET measures of SUVmax, SUVmean, Volabn (ml) and TLU were 0.37 (−0.01 to 0.75), −0.08 (−0.30 to 0.14), 1.40 (−2.26 to 5.06) and 9.61 (−3.95 to 23.17), respectively. The PET findings Volabn and TLU correlated with PSA (P < 0.05), but not with Gleason score or stage. Conclusion: Automated CNN segmentation provided, in seconds, volume and simple PET measures similar to manually derived ones. Further studies on automated CNN segmentation with newer tracers such as radiolabelled prostate-specific membrane antigen are warranted. (An illustrative sketch of the TLU calculation follows this entry.)
  •  
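The total lesion uptake in the entry above is the product SUVmean × Volabn, where SUVmean is taken over the voxels classified as abnormal and Volabn is their volume. A minimal sketch, assuming an SUV array restricted to the segmented prostate and a boolean abnormality mask (names and the threshold in the toy example are illustrative):

    import numpy as np

    def pet_measures(prostate_suv, abnormal_mask, voxel_volume_ml):
        """SUVmax, SUVmean over abnormal voxels, Volabn, and TLU = SUVmean x Volabn."""
        suv_max = float(prostate_suv.max())
        abnormal_values = prostate_suv[abnormal_mask]
        suv_mean = float(abnormal_values.mean()) if abnormal_values.size else 0.0
        vol_abn = float(np.count_nonzero(abnormal_mask) * voxel_volume_ml)
        return {"SUVmax": suv_max, "SUVmean": suv_mean,
                "Volabn_ml": vol_abn, "TLU": suv_mean * vol_abn}

    # Toy example: abnormal voxels defined here by a simple SUV threshold.
    rng = np.random.default_rng(2)
    prostate_suv = rng.gamma(2.0, 1.0, size=(30, 30, 30))
    abnormal = prostate_suv > 6.0
    print(pet_measures(prostate_suv, abnormal, voxel_volume_ml=0.01))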
45.
  •  
46.
  • Norlén, Alexander, 1988, et al. (författare)
  • Automatic pericardium segmentation and quantification of epicardial fat from computed tomography angiography
  • 2016
  • Ingår i: Journal of Medical Imaging. - 2329-4302 .- 2329-4310. ; 3:3
  • Tidskriftsartikel (refereegranskat)abstract
    • Recent findings indicate a strong correlation between the risk of future heart disease and the volume of adipose tissue inside of the pericardium. So far, large-scale studies have been hindered by the fact that manual delineation of the pericardium is extremely time-consuming and that existing methods for automatic delineation struggle with accuracy. In this paper, an efficient and fully automatic approach to pericardium segmentation and epicardial fat volume estimation is presented, based on a variant of multi-atlas segmentation for spatial initialization and a random forest classifier for accurate pericardium detection. Experimental validation on a set of 30 manually delineated computed tomography angiography (CTA) volumes shows a significant improvement on state-of-the-art in terms of EFV estimation (mean absolute epicardial fat volume difference: 3.8 ml (4.7%), Pearson correlation: 0.99) with run-times suitable for large-scale studies (52 s). Further, the results compare favorably to inter-observer variability measured on 10 volumes. (An illustrative sketch of the fat volume measurement follows this entry.)
  •  
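Once the pericardium has been segmented as above, the epicardial fat volume is typically obtained by counting voxels inside the pericardium whose CT attenuation falls in the adipose range. The sketch below uses the commonly quoted −190 to −30 HU window; that window and the array names are assumptions, not necessarily the thresholds used in the paper.

    import numpy as np

    def epicardial_fat_volume_ml(ct_hu, pericardium_mask, voxel_volume_ml,
                                 hu_range=(-190, -30)):
        """Volume of adipose-range voxels inside the pericardium."""
        fat = pericardium_mask & (ct_hu >= hu_range[0]) & (ct_hu <= hu_range[1])
        return float(np.count_nonzero(fat) * voxel_volume_ml)

    # Toy example: a synthetic CT volume with a fat-like pocket inside a spherical 'pericardium'.
    rng = np.random.default_rng(3)
    ct = rng.normal(40, 30, size=(64, 64, 64))   # soft-tissue-like background
    zz, yy, xx = np.ogrid[:64, :64, :64]
    pericardium = (zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2
    ct[24:32, 24:32, 24:32] = -100               # a pocket of fat inside the sphere
    print(f"EFV: {epicardial_fat_volume_ml(ct, pericardium, 0.05):.1f} ml")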
47.
  • Olsson, Carl, et al. (författare)
  • A polynomial-time bound for matching and registration with outliers
  • 2008
  • Ingår i: 2008 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, VOLS 1-12. - 1063-6919. ; , s. 3230-3237
  • Konferensbidrag (refereegranskat)abstract
    • We present a framework for computing optimal transformations, aligning one point set to another, in the presence of outliers. Example applications include shape matching and registration (using, for example, similarity, affine or projective transformations) as well as multiview reconstruction problems (triangulation, camera pose, etc.). While standard methods like RANSAC essentially use heuristics to cope with outliers, we seek to find the largest possible subset of consistent correspondences and the globally optimal transformation aligning the point sets. Based on theory from computational geometry, we show that this is indeed possible to accomplish in polynomial time. We develop several algorithms which make efficient use of convex programming. The scheme has been tested and evaluated on both synthetic and real data for several applications. (An illustrative sketch of the inlier-counting objective follows this entry.)
  •  
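The objective in the paper above is to find the transformation that maximizes the number of consistent correspondences. The globally optimal search itself is beyond a short sketch; the snippet below only illustrates the inlier-counting criterion for one candidate similarity transform (this is not the paper's algorithm, and the names and tolerance are illustrative).

    import numpy as np

    def count_inliers(src, dst, scale, R, t, tol):
        """Number of correspondences consistent with a 2D similarity transform.

        src, dst : (N, 2) corresponding points; R : 2x2 rotation; t : translation.
        A pair is an inlier if the transformed source point lands within `tol`
        of its destination point.
        """
        residuals = np.linalg.norm(dst - (scale * src @ R.T + t), axis=1)
        return int((residuals <= tol).sum())

    # Toy example: a 30-degree rotation plus translation, with two gross outliers.
    rng = np.random.default_rng(4)
    theta = np.deg2rad(30)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    t = np.array([1.0, -2.0])
    src = rng.uniform(0, 10, (20, 2))
    dst = src @ R.T + t
    dst[:2] += 50.0  # corrupt two correspondences
    print(count_inliers(src, dst, 1.0, R, t, tol=1e-6))  # 18 of 20 inliers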
48.
  • Olsson, Carl, et al. (författare)
  • Stable Structure from Motion for Unordered Image Collections
  • 2011
  • Ingår i: Lecture Notes in Computer Science (Image Analysis : 17th Scandinavian Conference, SCIA 2011, Ystad, Sweden, May 2011. Proceedings). - Berlin, Heidelberg : Springer Berlin Heidelberg. - 1611-3349 .- 0302-9743. - 9783642212260 - 9783642212277 ; 6688, s. 524-535
  • Konferensbidrag (refereegranskat)abstract
    • We present a non-incremental approach to structure from motion. Our solution is based on robustly computing global rotations from relative geometries and feeding these into the known-rotation framework to create an initial solution for bundle adjustment. To increase robustness, we present a new method for constructing reliable point tracks from pairwise matches. We show that our method can be seen as maximizing the reliability of a point track if the quality of the weakest link in the track is used to evaluate reliability. To estimate the final geometry, we alternate between bundle adjustment and a robust version of the known-rotation formulation. The ability to compute both structure and camera translations independently of initialization makes our algorithm insensitive to degenerate epipolar geometries. We demonstrate the performance of our system on a number of image collections. (An illustrative sketch of composing relative rotations into global rotations follows this entry.)
  •  
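The pipeline above starts by turning pairwise relative rotations into global camera rotations before the known-rotation step. As a much simplified stand-in for robust rotation averaging, the sketch below just chains relative rotations along a spanning tree of the view graph; the names and the composition convention R_j = R_ij · R_i are assumptions.

    import numpy as np
    from scipy.spatial.transform import Rotation as R

    def chain_global_rotations(n_cameras, relative_rotations, root=0):
        """Compose pairwise relative rotations along a BFS spanning tree to get
        global camera rotations. relative_rotations maps (i, j) -> R_ij with the
        convention R_j = R_ij @ R_i. Simplified stand-in for robust averaging."""
        adj = {k: [] for k in range(n_cameras)}
        for (i, j) in relative_rotations:
            adj[i].append(j)
            adj[j].append(i)
        global_R = {root: np.eye(3)}
        queue = [root]
        while queue:
            i = queue.pop(0)
            for j in adj[i]:
                if j in global_R:
                    continue
                if (i, j) in relative_rotations:
                    global_R[j] = relative_rotations[(i, j)] @ global_R[i]
                else:  # only (j, i) is stored, so invert it
                    global_R[j] = relative_rotations[(j, i)].T @ global_R[i]
                queue.append(j)
        return global_R

    # Toy example with three cameras on a chain.
    R01 = R.from_euler("z", 10, degrees=True).as_matrix()
    R12 = R.from_euler("y", 5, degrees=True).as_matrix()
    rots = chain_global_rotations(3, {(0, 1): R01, (1, 2): R12})
    print(np.allclose(rots[2], R12 @ R01))  # True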
49.
  •  
50.
  • Palmér, Tobias, et al. (författare)
  • A system for automated tracking of motor components in neurophysiological research
  • 2012
  • Ingår i: Journal of Neuroscience Methods. - : Elsevier BV. - 1872-678X .- 0165-0270. ; 205:2, s. 334-344
  • Tidskriftsartikel (refereegranskat)abstract
    • In the study of motor systems it is often necessary to track the movements of an experimental animal in great detail to allow for interpretation of recorded brain signals corresponding to different control signals. This task becomes increasingly difficult when analyzing complex compound movements in freely moving animals. One example of a complex motor behavior that can be studied in rodents is the skilled reaching test where animals are trained to use their forepaws to grasp small food objects, in many ways similar to human hand use. To fully exploit this model in neurophysiological research it is desirable to describe the kinematics at the level of movements around individual joints in 3D space since this permits analyses of how neuronal control signals relate to complex movement patterns. To this end, we have developed an automated system that estimates the paw pose using an anatomical paw model and recorded video images from six different image planes in rats chronically implanted with recording electrodes in neuronal circuits involved in selection and execution of forelimb movements. The kinematic description provided by the system allowed for a decomposition of reaching movements into a subset of motor components. Interestingly, firing rates of individual neurons were found to be modulated in relation to the actuation of these motor components suggesting that sets of motor primitives may constitute building blocks for the encoding of movement commands in motor circuits. The designed system will, thus, enable a more detailed analytical approach in neurophysiological studies of motor systems.
  •  
Typ av publikation
tidskriftsartikel (52)
konferensbidrag (40)
doktorsavhandling (1)
forskningsöversikt (1)
bokkapitel (1)
Typ av innehåll
refereegranskat (75)
övrigt vetenskapligt/konstnärligt (20)
Författare/redaktör
Enqvist, Olof, 1981 (67)
Ulen, Johannes (38)
Trägårdh, Elin (35)
Edenbrandt, Lars, 19 ... (31)
Enqvist, Olof (28)
Kahl, Fredrik (18)
Edenbrandt, Lars (14)
Sadik, May, 1970 (13)
Kahl, Fredrik, 1972 (12)
Kaboteh, Reza (10)
Kaboteh, R. (10)
Ulen, J. (10)
Edenbrandt, L. (9)
Borrelli, P. (9)
Åström, Karl (8)
Borrelli, Pablo (8)
Hoilund-Carlsen, P. ... (8)
Olsson, Carl (7)
Larsson, Måns, 1989 (7)
Poulsen, Mads (7)
Høilund-Carlsen, Pou ... (7)
Polymeri, Eirini (7)
Kjölhede, Henrik, 19 ... (6)
Alvén, Jennifer, 198 ... (6)
Lindgren Belal, Sara ... (6)
Svärm, Linus (6)
Tragardh, E. (6)
Petersson, Per (5)
Hoilund-Carlsen, Pou ... (5)
Palmér, Tobias (5)
Gerke, Oke (5)
Fredriksson, Johan (5)
Simonsen, Jane Angel (5)
Poulsen, M. H. (5)
Ohlsson, Mattias (4)
Larsson, Viktor (4)
Minarik, David (4)
Johnsson, Åse (Allan ... (4)
Ask, Erik (4)
Nakajima, Kenichi (4)
Fejne, Frida, 1986 (4)
Piri, Reza (4)
Landgren, Matilda (3)
Norlén, Alexander, 1 ... (3)
Oskarsson, Magnus (3)
Gerke, O (3)
Jiang, Fangyuan (3)
Johnsson, Åse (3)
Lind, Erica (3)
Sadik, M. (3)
Lärosäte
Chalmers tekniska högskola (71)
Lunds universitet (56)
Göteborgs universitet (24)
Linköpings universitet (2)
Högskolan i Halmstad (1)
Språk
Engelska (95)
Forskningsämne (UKÄ/SCB)
Medicin och hälsovetenskap (61)
Naturvetenskap (48)
Teknik (46)
