SwePub
Search the SwePub database



Search: WFRF:(Malmberg Filip)

  • Result 1-50 of 105
1.
  •  
2.
  •  
3.
  • Almgren, Karin, et al. (author)
  • Role of fibre-fibre and fibre-matrix adhesion in stress transfer in composites made from resin-impregnated paper sheets.
  • 2009
  • In: International Journal of Adhesion and Adhesives. - : Elsevier BV. - 0143-7496 .- 1879-0127. ; 29:5, s. 551-557
  • Journal article (peer-reviewed). Abstract:
    • Paper-reinforced plastics are gaining increased interest as packaging materials, where mechanical properties are of great importance. Strength and stress transfer in paper sheets are controlled by fibre-fibre bonds. In paper-reinforced plastics, where the sheet is impregnated with a polymer resin, other stress-transfer mechanisms may be more important. The influence of fibre-fibre bonds on the strength of paper-reinforced plastics was therefore investigated. Paper sheets with different degrees of fibre-fibre bonding were manufactured and used as reinforcement in a polymeric matrix. Image analysis tools were used to verify that the difference in the degree of fibre-fibre bonding had been preserved in the composite materials. Strength and stiffness of the composites were experimentally determined and showed no correlation to the degree of fibre-fibre bonding, in contrast to the behaviour of unimpregnated paper sheets. The degree of fibre-fibre bonding is therefore believed to have little importance in this type of material, where stress is mainly transferred through the fibre-matrix interface.
  •  
4.
  • Andersson, Axel, et al. (author)
  • Cell Segmentation of in situ Transcriptomics Data using Signed Graph Partitioning
  • 2023
  • In: Graph-Based Representations in Pattern Recognition. - Cham : Springer. - 9783031427947 - 9783031427954 ; , s. 139-148
  • Conference paper (peer-reviewed). Abstract:
    • The locations of different mRNA molecules can be revealed by multiplexed in situ RNA detection. By assigning detected mRNA molecules to individual cells, it is possible to identify many different cell types in parallel. This in turn enables investigation of the spatial cellular architecture in tissue, which is crucial for furthering our understanding of biological processes and diseases. However, cell typing typically depends on the segmentation of cell nuclei, which is often done based on images of a DNA stain, such as DAPI. Limiting cell definition to a nuclear stain makes it fundamentally difficult to determine accurate cell borders, and thereby also difficult to assign mRNA molecules to the correct cell. As such, we have developed a computational tool that segments cells solely based on the local composition of mRNA molecules. First, a small neural network is trained to compute attractive and repulsive edges between pairs of mRNA molecules. The signed graph is then partitioned by a mutex watershed into components corresponding to different cells. We evaluated our method on two publicly available datasets and compared it against the current state-of-the-art and older baselines. We conclude that combining neural networks with combinatorial optimization is a promising approach for cell segmentation of in situ transcriptomics data. The tool is open-source and publicly available for use at https://github.com/wahlby-lab/IS3G.
  •  
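The signed-graph partitioning step described in the abstract above can be sketched in a few lines. The toy graph and the union-find bookkeeping below are illustrative assumptions, not the authors' released code (which is at the linked IS3G repository): edges with positive weight are attractive, negative ones are repulsive, and the mutex watershed processes them by decreasing strength, merging clusters unless a repulsive constraint forbids it.

```python
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.mutex = [set() for _ in range(n)]  # repulsive constraints per root

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

def mutex_watershed(n_nodes, edges):
    """edges: (u, v, w) with w > 0 attractive, w < 0 repulsive.
    Returns a cluster label (root id) per node."""
    uf = UnionFind(n_nodes)
    # Strongest evidence first: sort by decreasing absolute weight.
    for u, v, w in sorted(edges, key=lambda e: -abs(e[2])):
        ru, rv = uf.find(u), uf.find(v)
        if ru == rv:
            continue
        blocked = any(uf.find(m) == rv for m in uf.mutex[ru])
        if w > 0 and not blocked:
            uf.parent[rv] = ru              # merge the two clusters
            uf.mutex[ru] |= uf.mutex[rv]    # inherit rv's constraints
        elif w < 0:
            uf.mutex[ru].add(rv)            # forbid any future merge
            uf.mutex[rv].add(ru)
    return [uf.find(i) for i in range(n_nodes)]

# Toy example: nodes 0,1 end up in one cluster, 2,3 in a separate one,
# because the repulsive edge (1,2) blocks the weaker attractive edge (0,3).
labels = mutex_watershed(4, [(0, 1, 0.9), (2, 3, 0.8), (1, 2, -0.7), (0, 3, 0.5)])
```

In the paper, the edge weights come from a small neural network trained on pairs of mRNA molecules; here they are hard-coded for illustration.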
5.
  • Andersson, Axel (author)
  • Computational Methods for Image-Based Spatial Transcriptomics
  • 2024
  • Doctoral thesis (other academic/artistic). Abstract:
    • Why does cancer develop, spread, grow, and lead to mortality? To answer these questions, one must study the fundamental building blocks of all living organisms — cells. Like a well-calibrated manufacturing unit, cells follow precise instructions by gene expression to initiate the synthesis of proteins, the workforces that drive all living biochemical processes. Recently, researchers have developed techniques for imaging the expression of hundreds of unique genes within tissue samples. This information is extremely valuable for understanding the cellular activities behind cancer-related diseases. These methods, collectively known as image-based spatial transcriptomics (IST) techniques, use fluorescence microscopy to combinatorially label mRNA species (corresponding to expressed genes) in tissue samples. Here, automatic image analysis is required to locate fluorescence signals and decode the combinatorial code. This process results in large quantities of points, marking the locations of expressed genes. These new data formats pose several challenges regarding visualization and automated analysis. This thesis presents several computational methods and applications related to data generated from IST methods.
Key contributions include: (i) a decoding method that jointly optimizes the detection and decoding of signals, particularly beneficial in scenarios with low signal-to-noise ratios or densely packed signals; (ii) a computational method for automatically delineating regions with similar gene compositions — efficient, interactive, and scalable for exploring patterns across different scales; (iii) a software tool enabling interactive visualization of millions of gene markers atop terapixel-sized images (TissUUmaps); (iv) a tool utilizing signed-graph partitioning for the automatic identification of cells, independent of a complementary nuclear stain; (v) a fast and analytical expression for a score that quantifies co-localization between spatial points (such as located genes); (vi) a demonstration that gene expression markers can train deep-learning models to classify tissue morphology. In the final contribution (vii), an IST technique features in a clinical study to spatially map the molecular diversity within tumors from patients with colorectal liver metastases, specifically those exhibiting a desmoplastic growth pattern. The study unveils novel molecular patterns characterizing cellular diversity in the transitional region between healthy liver tissue and the tumor. While a direct answer to the initial questions remains elusive, this study offers illuminating insights into the growth dynamics of colorectal cancer liver metastases, bringing us closer to understanding the journey from development to mortality in cancer.
  •  
6.
  • Andersson, Axel, et al. (author)
  • Points2Regions : Fast, interactive clustering of imaging-based spatial transcriptomics data
  • Other publication (other academic/artistic). Abstract:
    • Imaging-based spatial transcriptomics techniques generate image data that, once processed, result in a set of spatial points with categorical labels for different mRNA species. A crucial part of the downstream analysis concerns these point patterns. Here, biologically interesting patterns can be explored at different spatial scales. Molecular patterns on a cellular level would correspond to cell types, whereas patterns on a millimeter scale would correspond to tissue-level structures. Often, clustering methods are employed to identify and segment regions with distinct point patterns. Traditional clustering techniques for such data are constrained by reliance on complementary data or extensive machine learning, limiting their applicability to tasks on a particular scale. This paper introduces 'Points2Regions', a practical tool for clustering spatial points with categorical labels. Its flexible and computationally efficient clustering approach enables pattern discovery across multiple scales, making it a powerful tool for exploratory analysis. Points2Regions has demonstrated efficient performance on various datasets, adeptly defining biologically relevant regions similar to those found by scale-specific methods. As a Python package integrated into TissUUmaps and a Napari plugin, it offers interactive clustering and visualization, significantly enhancing the user experience in data exploration. In essence, Points2Regions presents a user-friendly and simple tool for exploratory analysis of spatial points with categorical labels.
  •  
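The core feature construction implied by the abstract above (the local composition of categorical labels) can be sketched as grid binning. The binning scheme below is a simplified assumption, not the package's exact multiscale features: points are assigned to grid cells, and each occupied cell gets a normalized histogram over the label classes.

```python
import numpy as np

def composition_features(xy, labels, n_classes, bin_size):
    """Bin 2D points on a grid; compute a per-bin histogram of categorical
    labels, normalized into a composition vector (occupied rows sum to 1)."""
    ij = np.floor(xy / bin_size).astype(int)
    ij -= ij.min(axis=0)                      # shift to non-negative indices
    shape = ij.max(axis=0) + 1
    flat = ij[:, 0] * shape[1] + ij[:, 1]     # linear bin index per point
    hist = np.zeros((shape[0] * shape[1], n_classes))
    np.add.at(hist, (flat, labels), 1)        # unbuffered scatter-add
    occupied = hist.sum(axis=1) > 0
    hist[occupied] /= hist[occupied].sum(axis=1, keepdims=True)
    return hist, occupied
```

Such composition vectors can then be fed to any standard clusterer (e.g. k-means) to obtain regions; varying `bin_size` shifts the analysis between cellular and tissue-level scales.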
7.
  • Andersson, Jonas, et al. (author)
  • Exact analysis of One-Warehouse-Multiple-Retailer inventory systems with quantity restricted deliveries
  • 2023
  • In: European Journal of Operational Research. - : Elsevier BV. - 0377-2217. ; 309:3, s. 1161-1172
  • Journal article (peer-reviewed). Abstract:
    • Joint consideration and coordination of inventory and shipment decisions is an important challenge in the quest for sustainable distribution systems. This paper addresses this issue by presenting a method for exact analysis of the inventory level distributions in a centralized One-Warehouse-Multiple-Retailer (OWMR) inventory system with quantity restricted deliveries from the warehouse. The system is characterized by continuous review (R,nQ) policies, complete backordering, and compound Poisson demand. The class of quantity restricted delivery policies we consider is motivated by a common desire in practice to ship full load carriers. It generalizes the assumption of unrestricted partial deliveries often used in the literature, and allows the warehouse to only ship retailer specific delivery batch quantities to the different retailers. The delivery batches typically represent package or pallet sizes, containers, or full truckloads. An attractive feature of our approach is that it does not add to the computational effort of analyzing the special case of unrestricted partial deliveries available in the existing literature. A numerical study shows that accounting for quantitative delivery restrictions used in transportation and handling operations when optimizing the reorder points in multi-echelon systems can be very important. Failure to do so can lead to high backorder costs and inadequate customer service.
  •  
8.
  •  
9.
  • Ayyalasomayajula, Kalyan Ram, 1980-, et al. (author)
  • CalligraphyNet: Augmenting handwriting generation with quill based stroke width
  • 2019
  • Other publication (other academic/artistic). Abstract:
    • Realistic handwritten document generation garners a lot of interest from the document research community for its ability to generate annotated data. In the current approach we have used GAN-based stroke width enrichment and style-transfer-based refinement over generated data, which results in realistic-looking handwritten document images. The GAN part of data augmentation transfers the stroke variation introduced by a writing instrument onto images rendered from trajectories created by tracking coordinates along the stylus movement. The coordinates from stylus movement are augmented with the learned stroke width variations during the data augmentation block. An RNN model is then trained to learn the variation along the movement of the stylus, along with the stroke variations corresponding to an input sequence of characters. This model is then used to generate images of words or sentences given an input character string. A document image thus created is used as a mask to transfer the style variations of the ink and the parchment. The generated image can capture the color content of the ink and parchment, useful for creating annotated data.
  •  
10.
  • Ayyalasomayajula, Kalyan Ram, 1980- (author)
  • Learning based segmentation and generation methods for handwritten document images
  • 2019
  • Doctoral thesis (other academic/artistic). Abstract:
    • Computerized analysis of handwritten documents is an active research area in image analysis and computer vision. The goal is to create tools that can be available for use at university libraries and for researchers in the humanities. Working with large collections of handwritten documents is very time consuming, and many old books and letters remain unread for centuries. Efficient computerized methods could help researchers in history, philology and computational linguistics to cost-effectively conduct a whole new type of research based on large collections of documents. The thesis makes a contribution to this area through the development of methods based on machine learning. The passage of time degrades historical documents. Humidity, stains, heat, mold and natural aging of the materials for hundreds of years make the documents increasingly difficult to interpret. The first half of the dissertation is therefore focused on cleaning the visual information in these documents by image segmentation methods based on energy minimization and machine learning. However, machine learning algorithms learn by imitating what is expected of them. One prerequisite for these methods to work is that ground truth is available. This causes a problem for historical documents, because there is a shortage of experts who can help to read and interpret them. The second part of the thesis is therefore about automatically creating synthetic documents that are similar to handwritten historical documents. Because they are generated from a known text, they come with a known ground truth. The visual content of the generated historical documents includes variation in the writing style and also imitates degradation factors to make the images realistic. When machine learning models are trained on synthetic images of handwritten text with a known ground truth, they can in many cases give even better results for real historical documents.
  •  
11.
  •  
12.
  • Blache, Ludovic, et al. (author)
  • SoftCut : A Virtual Planning Tool for Soft Tissue Resection on CT Images
  • 2018
  • In: Medical Image Understanding and Analysis. - Cham : Springer. - 9783319959207 ; , s. 299-310
  • Conference paper (peer-reviewed). Abstract:
    • With the increasing use of three-dimensional (3D) models and Computer Aided Design (CAD) in the medical domain, virtual surgical planning is now frequently used. Most of the current solutions focus on bone surgical operations. However, for head and neck oncologic resection, soft tissue ablation and reconstruction are common operations. In this paper, we propose a method to provide a fast and efficient estimation of the shape and dimensions of soft tissue resections. Our approach takes advantage of a simple sketch-based interface which allows the user to paint the contour of the resection on a patient-specific 3D model reconstructed from a computed tomography (CT) scan. The volume is then virtually cut and carved following this pattern. From the outline of the resection defined on the skin surface as a closed curve, we can identify which areas of the skin are inside or outside this shape. We then use distance transforms to identify the soft tissue voxels that are closest to the inside of this shape. Thus, we can propagate the shape of the resection into the soft tissue layers of the volume. We demonstrate the usefulness of the method on patient-specific CT data.
  •  
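The propagation step in the abstract above (classifying soft-tissue voxels by whether the nearest skin point lies inside or outside the painted outline) can be mimicked with a multi-source BFS on a grid. The paper uses distance transforms on 3D CT volumes; this 2D, unit-cost sketch is a simplified illustration under that assumption.

```python
from collections import deque

def carve_labels(shape, inside_seeds, outside_seeds):
    """Multi-source BFS over a 2D grid: every cell receives the label of its
    nearest seed ("in" or "out"), mimicking nearest-seed propagation."""
    label = {s: "in" for s in inside_seeds}
    label.update({s: "out" for s in outside_seeds})
    q = deque(label)                     # all seeds start at distance 0
    H, W = shape
    while q:
        y, x = q.popleft()
        for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
            if 0 <= ny < H and 0 <= nx < W and (ny, nx) not in label:
                label[(ny, nx)] = label[(y, x)]   # inherit nearest-seed label
                q.append((ny, nx))
    return label
```

Cells labeled "in" would then be carved out of the volume; a Euclidean distance transform would give the same partition with more accurate distances.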
13.
  •  
14.
  • Breznik, Eva (author)
  • Image Processing and Analysis Methods for Biomedical Applications
  • 2023
  • Doctoral thesis (other academic/artistic). Abstract:
    • With new technologies and developments, medical images can be acquired more quickly and at a larger scale than ever before. However, the increased amount of data induces an overhead in the human labour needed for its inspection and analysis. To support clinicians in decision making and enable swifter examinations, computerized methods can be utilized to automate the more time-consuming tasks. For such use, methods need to be highly accurate, fast, reliable and interpretable. In this thesis we develop and improve methods for image segmentation, retrieval and statistical analysis, with applications in imaging-based diagnostic pipelines. Individual objects often need to first be extracted/segmented from the image before they can be analysed further. We propose methodological improvements for deep learning-based segmentation methods using distance maps, with the focus on fully-supervised 3D patch-based training and training on 2D slices under point supervision. We show that using a directly interpretable distance prior helps to improve segmentation accuracy and training stability. For histological data in particular, we propose and extensively evaluate a contrastive learning and bag-of-words-based pipeline for cross-modal image retrieval. The method is able to recover correct matches from the database across modalities and small transformations with improved accuracy compared to the competitors. In addition, we examine a number of methods for multiplicity correction in statistical analyses of correlation using medical images. Evaluation strategies are discussed, and anatomy-observing extensions to the methods are developed as a way of directly decreasing the multiplicity issue in an interpretable manner, providing improvements in error control. The methods presented in this thesis were developed with clinical applications in mind and provide a strong base for further developments and future use in medical practice.
  •  
15.
  •  
16.
  • Breznik, Eva, et al. (author)
  • Multiple comparison correction methods for whole-body magnetic resonance imaging
  • 2020
  • In: Journal of Medical Imaging. - : SPIE-Intl Soc Optical Eng. - 2329-4302 .- 2329-4310. ; 7:1
  • Journal article (peer-reviewed). Abstract:
    • Purpose: Voxel-level hypothesis testing on images suffers from test multiplicity. Numerous correction methods exist, mainly applied and evaluated on neuroimaging and synthetic datasets. However, newly developed approaches like Imiomics, using different data and less common analysis types, also require multiplicity correction for more reliable inference. To handle the multiple comparisons in Imiomics, we aim to evaluate correction methods on whole-body MRI and correlation analyses, and to develop techniques specifically suited for the given analyses. Approach: We evaluate the most common familywise error rate (FWER) limiting procedures on whole-body correlation analyses via standard (synthetic no-activation) nominal error rate estimation as well as smaller prior-knowledge based stringency analysis. Their performance is compared to our anatomy-based method extensions. Results: Results show that nonparametric methods behave better for the given analyses. The proposed prior-knowledge based evaluation shows that the devised extensions including anatomical priors can achieve the same power while keeping the FWER closer to the desired rate. Conclusions: Permutation-based approaches perform adequately and can be used within Imiomics. They can be improved by including information on image structure. We expect such method extensions to become even more relevant with new applications and larger datasets.
  •  
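The nonparametric FWER control favoured in the study above can be illustrated with a max-statistic permutation sketch (Westfall-Young style). The data shapes and the plain correlation statistic below are illustrative assumptions, not the paper's exact Imiomics procedure: permuting the covariate breaks any true association, and the distribution of the per-permutation maximum gives a familywise threshold.

```python
import numpy as np

def maxT_threshold(X, y, n_perm=500, alpha=0.05, seed=0):
    """X: (subjects, voxels) image values; y: (subjects,) covariate.
    Permute y, record the maximum |correlation| over voxels each time,
    and return the (1 - alpha) quantile as the FWER-controlling threshold."""
    rng = np.random.default_rng(seed)

    def corr(yv):
        Xc = X - X.mean(axis=0)
        yc = yv - yv.mean()
        return (Xc * yc[:, None]).sum(axis=0) / (
            np.sqrt((Xc ** 2).sum(axis=0)) * np.sqrt((yc ** 2).sum()) + 1e-12)

    max_stats = np.array([np.abs(corr(rng.permutation(y))).max()
                          for _ in range(n_perm)])
    return np.quantile(max_stats, 1 - alpha)
```

Voxels whose observed |correlation| exceeds this threshold are declared significant with familywise error controlled at `alpha`; the anatomy-based extensions in the paper restrict or reweight this maximum using anatomical priors.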
17.
  • Christersson, Albert, et al. (author)
  • Comparison of 2D radiography and a semi-automatic CT-based 3D method for measuring change in dorsal angulation over time in distal radius fractures
  • 2016
  • In: Skeletal Radiology. - : Springer Science and Business Media LLC. - 0364-2348 .- 1432-2161. ; 45:6, s. 763-769
  • Journal article (peer-reviewed). Abstract:
    • Objective: The aim of the present study was to compare the reliability and agreement between a computed tomography (CT)-based method and digitalised 2D radiographs (XR) when measuring change in dorsal angulation over time in distal radius fractures. Materials and methods: Radiographs from 33 distal radius fractures treated with external fixation were retrospectively analysed. All fractures had been examined using both XR and CT at six times over 6 months postoperatively. The changes in dorsal angulation between the first reference images and the following examinations in every patient were calculated from 133 follow-up measurements by two assessors and repeated at two different time points. The measurements were analysed using Bland-Altman plots, comparing intra- and inter-observer agreement within and between XR and CT. Results: The mean differences in intra- and inter-observer measurements for XR, CT, and between XR and CT were close to zero, implying equal validity. The average intra- and inter-observer limits of agreement for XR, CT, and between XR and CT were +/- 4.4 degrees, +/- 1.9 degrees and +/- 6.8 degrees respectively. Conclusions: For scientific purposes, the reliability of XR seems unacceptably low when measuring changes in dorsal angulation in distal radius fractures, whereas the reliability of the semi-automatic CT-based method was higher; it is therefore preferable when a more precise method is required.
  •  
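The Bland-Altman limits of agreement reported in the study above are simple to compute; below is a minimal sketch. The 1.96 factor assumes approximately normally distributed differences, and the toy input is illustrative.

```python
import numpy as np

def bland_altman(a, b):
    """Bias (mean difference) and 95% limits of agreement between two
    methods measuring the same quantity, as plotted in Bland-Altman analysis."""
    d = np.asarray(a, float) - np.asarray(b, float)
    bias = d.mean()
    half_width = 1.96 * d.std(ddof=1)   # 95% limits under normality
    return bias, (bias - half_width, bias + half_width)
```

Narrow limits (such as the +/- 1.9 degrees for CT above) indicate high agreement; wide limits (the +/- 6.8 degrees between XR and CT) indicate the two methods cannot be used interchangeably.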
18.
  •  
19.
  • Discrete Geometry and Mathematical Morphology : First International Joint Conference, DGMM 2021, Uppsala, Sweden, May 24–27, 2021, Proceedings
  • 2021
  • Editorial collection (peer-reviewed). Abstract:
    • This book constitutes the proceedings of the First IAPR International Conference on Discrete Geometry and Mathematical Morphology, DGMM 2021, which was held during May 24-27, 2021, in Uppsala, Sweden. The conference was created by joining the International Conference on Discrete Geometry for Computer Imagery, DGCI, with the International Symposium on Mathematical Morphology, ISMM. The 36 papers included in this volume were carefully reviewed and selected from 59 submissions. They were organized in topical sections as follows: applications in image processing, computer vision, and pattern recognition; discrete and combinatorial topology; discrete geometry - models, transforms, visualization; discrete tomography and inverse problems; hierarchical and graph-based models, analysis and segmentation; learning-based approaches to mathematical morphology; multivariate and PDE-based mathematical morphology, morphological filtering. The book also contains 3 invited keynote papers.
  •  
20.
  • Ekström, Simon, 1991-, et al. (author)
  • Deformable Image Registration of Volumetric Whole-body MRI: An Evaluation
  • Other publication (other academic/artistic). Abstract:
    • Whole-body imaging presents a variety of interesting applications, and combining these information-rich images with image registration enables detailed large-scale analysis. Whole-body image registration, with the large variability present in human anatomy, introduces a range of challenges that need to be dealt with. This paper aims to present two new extensions to a previously published registration method based on compositive updates and voxel-wise regularization. The new extensions are evaluated against a previously presented pipeline for whole-body registration and a learning-based approach using the VoxelMorph framework. The methods are evaluated on Dice overlap, smoothness of produced displacement fields, and the inverse consistency error. The presented extensions are shown to improve upon the previous method both in terms of computation time and registration quality. The voxel-wise regularization produces a mean Dice overlap of 0.828 for the 10 segmented regions and a mean computation time of 320 seconds per subject. The learning-based approach had an inference time of only 3 seconds but a training time of 16 hours per reference subject. This approach produced a mean Dice overlap of only 0.797, but it was shown that the issues in overlap score were limited to the kidneys. In conclusion, both the extensions and VoxelMorph have shown great promise for the task of whole-body registration compared to the previous method. However, the choice of method will be highly dependent upon the task. VoxelMorph provides results of lower quality and reduced flexibility, but a computation time of only a few seconds.
  •  
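Dice overlap, the main evaluation measure in the study above, is a one-liner in NumPy; a minimal sketch with a toy pair of masks:

```python
import numpy as np

def dice_overlap(a, b):
    """Dice coefficient between two binary masks (1.0 = perfect overlap)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

A mean Dice of 0.828 vs. 0.797 across segmented regions, as reported above, is the per-region average of exactly this quantity.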
21.
  • Ekström, Simon, 1991- (author)
  • Efficient GPU-based Image Registration : for Detailed Large-Scale Whole-body Analysis
  • 2020
  • Doctoral thesis (other academic/artistic). Abstract:
    • Imaging has become an important aspect of medicine, enabling visualization of internals in a non-invasive manner. The rapid advancement and adoption of imaging techniques have led to a demand for tools able to take advantage of the information that is produced. Medical image analysis aims to extract relevant information from acquired images to aid diagnostics in healthcare and increase the understanding within medical research. The main subject of this thesis, image registration, is a widely used tool in image analysis that can be employed to find a spatial transformation aligning a set of images. One application, that is described in detail in this thesis, is the use of image registration for large-scale analysis of whole-body images through the utilization of the correspondences defined by the resulting transformations. To produce detailed results, the correspondences, i.e. transformations, need to be of high resolution and the quality of the result has a direct impact on the quality of the analysis. Also, this type of application aims to analyze large cohorts and the value of a registration method is not only weighted by its ability to produce an accurate result but also by its efficiency. This thesis presents two contributions on the subject; a new method for efficient image registration with the ability to produce dense deformable transformations, and the application of the presented method in large-scale analysis of a whole-body dataset acquired using an integrated positron emission tomography (PET) and magnetic resonance imaging (MRI) system. In this thesis, it is shown that efficient and detailed image registration can be performed by employing graph cuts and a heuristic where the optimization is performed on subregions of the image. The performance can be improved further by the efficient utilization of a graphics processing unit (GPU). 
It is also shown that the method can be employed to produce a model of health based on a PET-MRI dataset, which can be utilized to automatically detect pathology in the imaging.
  •  
22.
  • Ekström, Simon, 1991-, et al. (author)
  • Fast graph-cut based optimization for practical dense deformable registration of volume images
  • 2020
  • In: Computerized Medical Imaging and Graphics. - : Elsevier. - 0895-6111 .- 1879-0771. ; 84
  • Journal article (peer-reviewed). Abstract:
    • Deformable image registration is a fundamental problem in medical image analysis, with applications such as longitudinal studies, population modeling, and atlas-based image segmentation. Registration is often phrased as an optimization problem, i.e., finding a deformation field that is optimal according to a given objective function. Discrete, combinatorial, optimization techniques have successfully been employed to solve the resulting optimization problem. Specifically, optimization based on α-expansion with minimal graph cuts has been proposed as a powerful tool for image registration. The high computational cost of the graph-cut based optimization approach, however, limits the utility of this approach for registration of large volume images. Here, we propose to accelerate graph-cut based deformable registration by dividing the image into overlapping sub-regions and restricting the α-expansion moves to a single sub-region at a time. We demonstrate empirically that this approach can achieve a large reduction in computation time - from days to minutes - with only a small penalty in terms of solution quality. The reduction in computation time provided by the proposed method makes graph-cut based deformable registration viable for large volume images. Graph-cut based image registration has previously been shown to produce excellent results, but the high computational cost has hindered the adoption of the method for registration of large medical volume images. Our proposed method lifts this restriction, requiring only a small fraction of the computational cost to produce results of comparable quality.
  •  
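The sub-region restriction at the heart of the method above can be sketched as a tiling generator: split the volume into overlapping blocks and restrict each α-expansion move to one block at a time. The block and overlap sizes below are illustrative assumptions, not the paper's tuned values.

```python
from itertools import product

def overlapping_blocks(shape, block, overlap):
    """Yield slice-tuples tiling an N-D volume with overlapping sub-regions,
    so each optimization move can be restricted to one block at a time."""
    step = [max(1, b - o) for b, o in zip(block, overlap)]
    starts = [range(0, s, st) for s, st in zip(shape, step)]
    for corner in product(*starts):
        yield tuple(slice(c, min(c + b, s))
                    for c, b, s in zip(corner, block, shape))
```

An outer loop would then solve a small graph-cut problem per block and iterate until the total energy stops decreasing; the overlap lets information propagate between neighbouring blocks across iterations.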
23.
  • Ekström, Simon, 1991-, et al. (author)
  • Faster dense deformable image registration by utilizing both CPU and GPU
  • Other publication (other academic/artistic). Abstract:
    • Purpose: Image registration is an important aspect of medical image analysis and a key component in many analysis concepts. Applications include fusion of multimodal images, multi-atlas segmentation, and whole-body analysis. Deformable image registration is often computationally expensive, and the need for efficient registration methods is highlighted by the emergence of large-scale image databases, e.g., the UK Biobank, providing imaging from 100 000 participants. Approach: We present a heterogeneous computing approach, utilizing both the CPU and the GPU, to accelerate a previously proposed image registration method. The parallelizable task of computing the matching criterion is offloaded to the GPU, where it can be computed efficiently, while the more complex optimization task is performed on the CPU. To lessen the impact of data synchronization between the CPU and GPU, we propose a pipeline model, effectively overlapping computational tasks with data synchronization. The performance is evaluated on a brain labeling task and compared with a CPU implementation of the same method and the popular Advanced Normalization Tools (ANTs) software. Results: The proposed method presents a speed-up by factors of 4 and 8 against the CPU implementation and the ANTs software, respectively. A significant improvement in labeling quality was also observed, with measured mean Dice overlaps of 0.712 and 0.701 for our method and ANTs, respectively. Conclusions: We showed that the proposed method compares favorably to the ANTs software, yielding both a significant speed-up and an improvement in labeling quality. The registration method together with the proposed parallelization strategy is implemented as an open-source software package, deform.
  •  
24.
  • Ekström, Simon, 1991-, et al. (author)
  • Faster dense deformable image registration by utilizing both CPU and GPU
  • 2021
  • In: Journal of Medical Imaging. - 2329-4302 .- 2329-4310. ; 8:1
  • Journal article (peer-reviewed). Abstract:
    • Purpose: Image registration is an important aspect of medical image analysis and a key component in many analysis concepts. Applications include fusion of multimodal images, multi-atlas segmentation, and whole-body analysis. Deformable image registration is often computationally expensive, and the need for efficient registration methods is highlighted by the emergence of large-scale image databases, e.g., the UK Biobank, providing imaging from 100,000 participants. Approach: We present a heterogeneous computing approach, utilizing both the CPU and the graphics processing unit (GPU), to accelerate a previously proposed image registration method. The parallelizable task of computing the matching criterion is offloaded to the GPU, where it can be computed efficiently, while the more complex optimization task is performed on the CPU. To lessen the impact of data synchronization between the CPU and GPU, we propose a pipeline model, effectively overlapping computational tasks with data synchronization. The performance is evaluated on a brain labeling task and compared with a CPU implementation of the same method and the popular advanced normalization tools (ANTs) software. Results: The proposed method presents a speed-up by factors of 4 and 8 against the CPU implementation and the ANTs software, respectively. A significant improvement in labeling quality was also observed, with measured mean Dice overlaps of 0.712 and 0.701 for our method and ANTs, respectively. Conclusions: We showed that the proposed method compares favorably to the ANTs software yielding both a significant speed-up and an improvement in labeling quality. The registration method together with the proposed parallelization strategy is implemented as an open-source software package, deform.
  •  
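The pipeline model described in the Approach above can be mimicked in a few lines: a two-stage producer/consumer where one stage (e.g. a host-to-device transfer) runs in a background thread while the other stage consumes from a bounded queue, so the stages overlap instead of running strictly in sequence. This is a language-agnostic sketch in Python threads, not the deform package's implementation.

```python
import queue
import threading

def pipeline(tasks, transfer, compute, n_buffers=2):
    """Run 'transfer' for each task in a background thread while 'compute'
    consumes finished transfers from a bounded queue, overlapping the two."""
    q = queue.Queue(maxsize=n_buffers)  # bounded: limits in-flight buffers

    def producer():
        for t in tasks:
            q.put(transfer(t))          # e.g. upload data to the GPU
        q.put(None)                     # sentinel: no more work

    threading.Thread(target=producer, daemon=True).start()
    results = []
    while (item := q.get()) is not None:
        results.append(compute(item))   # e.g. CPU-side optimization step
    return results

# Toy usage: "transfer" scales each task, "compute" adds one.
out = pipeline([1, 2, 3], lambda x: x * 10, lambda x: x + 1)
```

The bounded queue plays the role of the double-buffering in the paper's pipeline: while one buffer is being computed on, the next is already being transferred.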
25.
  • Fors Connolly, Filip, 1981-, et al. (author)
  • Adjustment of daily activities to restrictions and reported spread of the COVID-19 pandemic across Europe
  • 2021
  • Reports (other academic/artistic). Abstract:
    • This paper addresses adjustments of daily activities in the wake of the COVID-19 pandemic among people aged 50 years and older in Europe, and investigates the extent to which such adjustments are associated with the stringency of governmental restrictions and the overall spread of COVID-19. We use data from the SHARE Corona Survey collected during summer 2020, published data on government response stringency, and reported country-specific prevalence and mortality of COVID-19. Our analyses show that older Europeans across the continent have reduced their daily activities quite substantially during the pandemic. However, we observe variation across countries and demographic groups, which may be important to highlight for policymakers. Our explanatory analysis replicates previous studies using mobility data, showing that both restrictions and infections predict a reduction in mobility. Thus, policymakers could potentially rely on both restrictions and voluntary adjustments in order to decrease the spread of the virus. However, it is noteworthy that we find relatively weaker associations with restrictions compared to previous studies using mobility data. One explanation for this discrepancy could be that our study focuses on older people, who face a higher risk of becoming severely ill and therefore have stronger incentives to adjust their behaviours independent of governmental regulations.
  •  
26.
  • Gifford, Aliya, et al. (author)
  • Canine body composition quantification using 3 tesla fat–water MRI
  • 2014
  • In: Journal of Magnetic Resonance Imaging. - : Wiley. - 1053-1807 .- 1522-2586. ; 39:2, s. 485-491
  • Journal article (peer-reviewed)abstract
    • Purpose: To test the hypothesis that a whole-body fat–water MRI (FWMRI) protocol acquired at 3 Tesla combined with semi-automated image analysis techniques enables precise volume and mass quantification of adipose, lean, and bone tissue depots that agree with static scale mass and scale mass changes in the context of a longitudinal study of large-breed dogs placed on an obesogenic high-fat, high-fructose diet. Materials and Methods: Six healthy adult male dogs were scanned twice, at weeks 0 (baseline) and 4, of the dietary regimen. FWMRI-derived volumes of adipose tissue (total, visceral, and subcutaneous), lean tissue, and cortical bone were quantified using a semi-automated approach. Volumes were converted to masses using published tissue densities. Results: FWMRI-derived total mass corresponds with scale mass with a concordance correlation coefficient of 0.931 (95% confidence interval = [0.813, 0.975]), and slope and intercept values of 1.12 and −2.23 kg, respectively. Visceral, subcutaneous, and total adipose tissue masses increased significantly from weeks 0 to 4, while neither cortical bone nor lean tissue masses changed significantly. This is evidenced by a mean percent change of 70.2% for visceral, 67.0% for subcutaneous, and 67.1% for total adipose tissue. Conclusion: FWMRI can precisely quantify and map body composition with respect to adipose, lean, and bone tissue depots. The described approach provides a valuable tool to examine the role of distinct tissue depots in an established animal model of human metabolic disease.
  •  
27.
  • Guglielmo, Priscilla, et al. (author)
  • Validation of automated whole-body analysis of metabolic and morphological parameters from an integrated FDG-PET/MRI acquisition
  • 2020
  • In: Scientific Reports. - : Springer Science and Business Media LLC. - 2045-2322. ; 10:1
  • Journal article (peer-reviewed)abstract
    • Automated quantification of tissue morphology and tracer uptake in PET/MR images could streamline the analysis compared to traditional manual methods. The aim of this study was to validate a single-atlas image segmentation approach for automated assessment of tissue volume, fat content (FF) and glucose uptake (GU) from whole-body [18F]FDG-PET/MR images. Twelve subjects underwent whole-body [18F]FDG-PET/MRI during hyperinsulinemic-euglycemic clamp. Automated analysis of tissue volumes, FF and GU was achieved using image registration to a single atlas image with reference segmentations of 18 volumes of interest (VOIs). Manual segmentations by an experienced radiologist were used as reference. Quantification accuracy was assessed with Dice scores, group comparisons and correlations. VOI Dice scores ranged from 0.93 to 0.32. Muscles, brain, VAT and liver showed the highest scores. Pancreas, large and small intestines demonstrated lower segmentation accuracy and poor correlations. Estimated tissue volumes differed significantly in 8 cases. Tissue FFs were often slightly but significantly overestimated. Satisfactory agreements were observed in most tissue GUs. Automated tissue identification and characterization using a single atlas segmentation performs well compared to manual segmentation in most tissues and will be valuable in future studies. In certain tissues, alternative quantification methods or improvements to the current approach are needed.
  •  
28.
  • Khonsari, R H, et al. (author)
  • Shape and volume of craniofacial cavities in intentional skull deformations
  • 2013
  • In: American Journal of Physical Anthropology. - : Wiley. - 0002-9483 .- 1096-8644. ; 151:1, s. 110-119
  • Journal article (peer-reviewed)abstract
    • Intentional cranial deformations (ICD) have been observed worldwide but are especially prevalent in pre-Columbian cultures. The purpose of this study was to assess the consequences of ICD on three cranial cavities (intracranial cavity, orbits, and maxillary sinuses) and on cranial vault thickness, in order to screen for morphological changes due to the external constraints exerted by the deformation device. We acquired CT-scans for 39 deformed and 19 control skulls. We studied the thickness of the skull vault using qualitative and quantitative methods. We computed the volumes of the orbits, of the maxillary sinuses, and of the intracranial cavity using haptic-aided semi-automatic segmentation. We finally defined 3D distances and angles within orbits and maxillary sinuses based on 27 anatomical landmarks and measured these features on the 58 skulls. Our results show specific bone thickness patterns in some types of ICD, with localized thinning in regions subjected to increased pressure and thickening in other regions. Our findings confirm that volumes of the cranial cavities are not affected by ICDs but that the shapes of the orbits and of the maxillary sinuses are modified in circumferential deformations. We conclude that ICDs can modify the shape of the cranial cavities and the thickness of their walls but conserve their volumes. These results provide new insights into the morphological effects associated with ICDs and call for similar investigations in subjects with deformational plagiocephalies and craniosynostoses.
  •  
29.
  • Litjens, Geert, et al. (author)
  • Evaluation of prostate segmentation algorithms for MRI : The PROMISE12 challenge
  • 2014
  • In: Medical Image Analysis. - : Elsevier BV. - 1361-8415 .- 1361-8423. ; 18:2, s. 359-373
  • Journal article (peer-reviewed)abstract
    • Prostate MRI image segmentation has been an area of intense research due to the increased use of MRI as a modality for the clinical workup of prostate cancer. Segmentation is useful for various tasks, e.g. to accurately localize prostate boundaries for radiotherapy or to initialize multi-modal registration algorithms. In the past, it has been difficult for research groups to evaluate prostate segmentation algorithms on multi-center, multi-vendor and multi-protocol data. Especially because we are dealing with MR images, image appearance, resolution and the presence of artifacts are affected by differences in scanners and/or protocols, which in turn can have a large influence on algorithm accuracy. The Prostate MR Image Segmentation (PROMISE12) challenge was set up to allow a fair and meaningful comparison of segmentation methods on the basis of performance and robustness. In this work we discuss the initial results of the online PROMISE12 challenge, and the results obtained in the live challenge workshop hosted by the MICCAI2012 conference. In the challenge, 100 prostate MR cases from 4 different centers were included, with differences in scanner manufacturer, field strength and protocol. A total of 11 teams from academic research groups and industry participated. Algorithms showed a wide variety in methods and implementation, including active appearance models, atlas registration and level sets. Evaluation was performed using boundary and volume based metrics which were combined into a single score relating the metrics to human expert performance. The winners of the challenge were the algorithms by teams Imorphics and ScrAutoProstate, with scores of 85.72 and 84.29 overall. Both algorithms were significantly better than all other algorithms in the challenge (p < 0.05) and had efficient implementations with run times of 8 min and 3 s per case, respectively.
Overall, active appearance model based approaches seemed to outperform other approaches like multi-atlas registration, both on accuracy and computation time. Although average algorithm performance was good to excellent and the Imorphics algorithm outperformed the second observer on average, we showed that algorithm combination might lead to further improvement, indicating that optimal performance for prostate segmentation is not yet obtained. All results are available online at http://promise12.grand-challenge.org/. (C) 2013 Elsevier B.V. All rights reserved.
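Volume-overlap scores like the Dice coefficients reported in these evaluations are computed as twice the intersection over the sum of the two mask sizes. A minimal sketch on binary masks represented as sets of voxel indices (the set representation is an illustrative choice; challenge implementations typically operate on label volumes):

```python
def dice(a, b):
    """Dice overlap between two binary masks given as sets of voxel indices."""
    if not a and not b:
        return 1.0  # two empty masks agree perfectly by convention
    return 2 * len(a & b) / (len(a) + len(b))

seg = {(0, 0), (0, 1), (1, 0), (1, 1)}  # algorithm output
ref = {(0, 1), (1, 0), (1, 1), (2, 1)}  # expert reference
print(dice(seg, ref))  # 0.75
```

A Dice score of 1.0 means perfect agreement and 0.0 means no overlap at all.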
  •  
30.
  •  
31.
  • Malmberg, Filip, et al. (author)
  • A 3D live-wire segmentation method for volume images using haptic interaction
  • 2006
  • In: DISCRETE GEOMETRY FOR COMPUTER IMAGERY, PROCEEDINGS 4245. ; , s. 663-673
  • Conference paper (peer-reviewed)abstract
    • Designing interactive segmentation methods for digital volume images is difficult, mainly because efficient 3D interaction is much harder to achieve than interaction with 2D images. To overcome this issue, we use a system that combines stereo graphics and haptics to facilitate efficient 3D interaction. We propose a new method, based on the 2D live-wire method, for segmenting volume images. Our method consists of two parts: an interface for drawing 3D live-wire curves onto the boundary of an object in a volume image, and an algorithm for connecting two such curves to create a discrete surface.
  •  
32.
  • Malmberg, Filip, 1980-, et al. (author)
  • A Graph-based Framework for Sub-pixel Image Segmentation
  • 2011
  • In: Theoretical Computer Science. - : Elsevier BV. - 0304-3975 .- 1879-2294. ; 412:15, s. 1338-1349
  • Journal article (peer-reviewed)abstract
    • Many image segmentation methods utilize graph structures for representing images, where the flexibility and generality of the abstract structure is beneficial. By using a fuzzy object representation, i.e., allowing partial belongingness of elements to image objects, the unavoidable loss of information when representing continuous structures by finite sets is significantly reduced, enabling feature estimates with sub-pixel precision. This work presents a framework for object representation based on fuzzy segmented graphs. Interpreting the edges as one-dimensional paths between the vertices of a graph, we extend the notion of a graph cut to that of a located cut, i.e., a cut with sub-edge precision. We describe a method for computing a located cut from a fuzzy segmentation of graph vertices. Further, the notion of vertex coverage segmentation is proposed as a graph theoretic equivalent to pixel coverage segmentations and a method for computing such a segmentation from a located cut is given. Utilizing the proposed framework, we demonstrate improved precision of area measurements of synthetic two-dimensional objects. We emphasize that although the experiments presented here are performed on two-dimensional images, the proposed framework is defined for general graphs and thus applicable to images of any dimension.
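One natural reading of a located cut with sub-edge precision: if fuzzy memberships are linearly interpolated along an edge treated as a unit-length path, the cut point is where the interpolant crosses the 0.5 level. This is an illustrative sketch under that assumption, not necessarily the paper's exact construction:

```python
def located_cut_position(fu, fv, threshold=0.5):
    """Position t in (0, 1) along edge (u, v) where the linearly
    interpolated fuzzy membership crosses the threshold.
    Returns None if both endpoints lie on the same side (edge not cut)."""
    if (fu - threshold) * (fv - threshold) >= 0:
        return None  # no sign change: the edge is not part of the cut
    return (threshold - fu) / (fv - fu)

print(round(located_cut_position(0.8, 0.2), 6))  # 0.5: crossing at the midpoint
print(round(located_cut_position(0.9, 0.4), 6))  # 0.8: crossing near vertex v
```

Summing such sub-edge positions over boundary edges is what enables area estimates with better-than-pixel precision.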
  •  
33.
  •  
34.
  •  
35.
  • Malmberg, Filip, et al. (author)
  • An efficient algorithm for exact evaluation of stochastic watersheds
  • 2014
  • In: Pattern Recognition Letters. - : Elsevier BV. - 0167-8655 .- 1872-7344. ; 47, s. 80-84
  • Journal article (peer-reviewed)abstract
    • The stochastic watershed is a method for unsupervised image segmentation proposed by Angulo and Jeulin (2007). The method first computes a probability density function (PDF), assigning to each piece of contour in the image the probability to appear as a segmentation boundary in seeded watershed segmentation with randomly selected seeds. Contours that appear with high probability are assumed to be more important. This PDF is then post-processed to obtain a final segmentation. The main computational hurdle with the stochastic watershed method is the calculation of the PDF. In the original publication by Angulo and Jeulin, the PDF was estimated by Monte Carlo simulation, i.e., repeatedly selecting random markers and performing seeded watershed segmentation. Meyer and Stawiaski (2010) showed that the PDF can be calculated exactly, without performing any Monte Carlo simulations, but do not provide any implementation details. In a naive implementation, the computational cost of their method is too high to make it useful in practice. Here, we extend the work of Meyer and Stawiaski by presenting an efficient (quasi-linear) algorithm for exact computation of the PDF. We demonstrate that in practice, the proposed method is faster than any previously reported method by more than two orders of magnitude. The algorithm is formulated for general undirected graphs, and thus trivially generalizes to images with any number of dimensions.
  •  
36.
  •  
37.
  • Malmberg, Filip, et al. (author)
  • Binarization of phase contrast volume images of fibrous materials
  • 2009
  • In: Proceedings of the 4th International Conference on Computer Vision Theory and Applications. - : INSTICC Press. - 9789898111692 ; , s. 148-153
  • Conference paper (peer-reviewed)abstract
    • In this paper, we present a method for segmenting phase contrast volume images of fibrous materials into fibre and background. The method is based on graph cut segmentation, and is tested on high resolution X-ray microtomography volume images of wood fibres in paper and composites. The new method produces better results than a standard method based on edge-preserving smoothing and hysteresis thresholding. The most important improvement is that the proposed method handles thick and collapsed fibres more accurately than previous methods.
  •  
38.
  •  
39.
  • Malmberg, Filip, et al. (author)
  • Evaluation and control of inventory distribution systems with quantity based shipment consolidation
  • 2023
  • In: Naval Research Logistics. - : Wiley. - 0894-069X .- 1520-6750. ; 70:2, s. 205-227
  • Journal article (peer-reviewed)abstract
    • Joint consideration of inventory and shipment decisions is an important aspect of obtaining economically and environmentally sustainable distribution systems. We consider this issue in the context of a one-warehouse-multiple-retailer inventory system with quantity-based shipment consolidation to groups of nonidentical retailers facing Poisson demand. The system is centralized, with free information sharing and access to real-time point of sale data. Thus, demand information at the retailers is immediately conveyed upstream to the warehouse without any fixed ordering costs. However, fixed costs associated with handling and transporting goods from the warehouse to the retailers are reflected in the shipment consolidation policy. Stock replenishments at the warehouse are made from an outside supplier/manufacturer according to an (R, Q) policy. For this system, we derive an exact recursive method for determining the inventory level distributions at the retailers. This allows us to evaluate the expected inventory and shipment costs, fill-rates, and transport emissions for the entire system. We also show how to optimize the system by providing bounds on the optimal shipment quantities and the warehouse reorder level.
  •  
40.
  • Malmberg, Filip, et al. (author)
  • Exact evaluation of stochastic watersheds : From trees to general graphs
  • 2014
  • In: Discrete Geometry for Computer Imagery. - Berlin, Heidelberg : Springer Berlin/Heidelberg. - 9783319099545 ; 8668, s. 309-319
  • Conference paper (peer-reviewed)abstract
    • The stochastic watershed is a method for identifying salient contours in an image, with applications to image segmentation. The method computes a probability density function (PDF), assigning to each piece of contour in the image the probability to appear as a segmentation boundary in seeded watershed segmentation with randomly selected seedpoints. Contours that appear with high probability are assumed to be more important. This paper concerns an efficient method for computing the stochastic watershed PDF exactly, without performing any actual seeded watershed computations. A method for exact evaluation of stochastic watersheds was proposed by Meyer and Stawiaski (2010). Their method does not operate directly on the image, but on a compact tree representation where each edge in the tree corresponds to a watershed partition of the image elements. The output of the exact evaluation algorithm is thus a PDF defined over the edges of the tree. While the compact tree representation is useful in its own right, it is in many cases desirable to convert the results from this abstract representation back to the image, e.g., for further processing. Here, we present an efficient linear time algorithm for performing this conversion.
  •  
41.
  • Malmberg, Filip, 1980-, et al. (author)
  • Exact Evaluation of Targeted Stochastic Watershed Cuts
  • 2017
  • In: Discrete Applied Mathematics. - : Elsevier. - 0166-218X .- 1872-6771. ; 216:2, s. 449-460
  • Journal article (peer-reviewed)abstract
    • Seeded segmentation with minimum spanning forests, also known as segmentation by watershed cuts, is a powerful method for supervised image segmentation. Given that correct segmentation labels are provided for a small set of image elements, called seeds, the watershed cut method completes the labeling for all image elements so that the boundaries between different labels are optimally aligned with salient edges in the image. Here, a randomized version of watershed segmentation, the targeted stochastic watershed, is proposed for performing multi-label targeted image segmentation with stochastic seed input. The input to the algorithm is a set of probability density functions (PDFs), one for each segmentation label, defined over the pixels of the image. For each pixel, we calculate the probability that the pixel is assigned a given segmentation label in seeded watershed segmentation with seeds drawn from the input PDFs. We propose an efficient algorithm (quasi-linear with respect to the number of image elements) for calculating the desired probabilities exactly.
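The seeded watershed cut underlying this work can be illustrated with a Kruskal-style minimum spanning forest that processes edges by increasing weight and never merges components carrying different seed labels. This sketch shows the deterministic watershed cut only, not the exact probability computation proposed in the paper:

```python
def watershed_cut(n, edges, seeds):
    """Seeded segmentation by minimum spanning forest (Kruskal-style).
    n     -- number of vertices 0..n-1
    edges -- list of (weight, u, v); low weight = strong affinity
    seeds -- dict vertex -> label
    Returns a label per vertex."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    label = [None] * n
    for v, l in seeds.items():
        label[v] = l
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru == rv:
            continue
        lu, lv = label[ru], label[rv]
        if lu is not None and lv is not None and lu != lv:
            continue  # merging would join two seed labels: this edge is cut
        parent[ru] = rv
        if label[rv] is None:
            label[rv] = lu
    # propagate each component's label to all of its vertices
    return [label[find(v)] for v in range(n)]

# A path 0-1-2-3 with a weak (high-weight) edge between 1 and 2:
edges = [(1, 0, 1), (9, 1, 2), (2, 2, 3)]
print(watershed_cut(4, edges, {0: 'A', 3: 'B'}))  # ['A', 'A', 'B', 'B']
```

The boundary lands on the highest-weight edge of the path, which is exactly the "optimally aligned with salient edges" property of watershed cuts.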
  •  
42.
  • Malmberg, Filip, et al. (author)
  • Faster Fuzzy Connectedness via Precomputation
  • 2013
  • In: Mathematical Morphology and Its Applications to Signal and Image Processing. - : Springer. ; , s. 476-483
  • Conference paper (peer-reviewed)
  •  
43.
  • Malmberg, Filip, 1980-, et al. (author)
  • Generalized Hard Constraints for Graph Segmentation
  • 2011
  • In: Image Analysis, 17th Scandinavian Conference, SCIA 2011. - : Springer.
  • Conference paper (peer-reviewed)abstract
    • Graph-based methods have become well-established tools for image segmentation. Viewing the image as a weighted graph, these methods seek to extract a graph cut that best matches the image content. Many of these methods are interactive, in that they allow a human operator to guide the segmentation process by specifying a set of hard constraints that the cut must satisfy. Typically, these constraints are given in one of two forms: regional constraints (a set of vertices that must be separated by the cut) or boundary constraints (a set of edges that must be included in the cut). Here, we propose a new type of hard constraint that includes both regional constraints and boundary constraints as special cases. We also present an efficient method for computing cuts that satisfy a set of generalized constraints, while globally minimizing a graph-cut measure.
  •  
44.
  • Malmberg, Filip, 1980- (author)
  • Graph-based Methods for Interactive Image Segmentation
  • 2011
  • Doctoral thesis (other academic/artistic)abstract
    • The subject of digital image analysis deals with extracting relevant information from image data, stored in digital form in a computer. A fundamental problem in image analysis is image segmentation, i.e., the identification and separation of relevant objects and structures in an image. Accurate segmentation of objects of interest is often required before further processing and analysis can be performed.
Despite years of active research, fully automatic segmentation of arbitrary images remains an unsolved problem. Interactive, or semi-automatic, segmentation methods use human expert knowledge as additional input, thereby making the segmentation problem more tractable. The goal of interactive segmentation methods is to minimize the required user interaction time, while maintaining tight user control to guarantee the correctness of the results. Methods for interactive segmentation typically operate under one of two paradigms for user guidance: (1) specification of pieces of the boundary of the desired object(s); (2) specification of correct segmentation labels for a small subset of the image elements. These types of user input are referred to as boundary constraints and regional constraints, respectively.
This thesis concerns the development of methods for interactive segmentation, using a graph-theoretic approach. We view an image as an edge weighted graph, whose vertex set is the set of image elements, and whose edges are given by an adjacency relation among the image elements. Due to its discrete nature and mathematical simplicity, this graph based image representation lends itself well to the development of efficient, and provably correct, methods.
The contributions in this thesis may be summarized as follows:
- Existing graph-based methods for interactive segmentation are modified to improve their performance on images with noisy or missing data, while maintaining a low computational cost.
- Fuzzy techniques are utilized to obtain segmentations from which feature measurements can be made with increased precision.
- A new paradigm for user guidance, that unifies and generalizes regional and boundary constraints, is proposed.
- The practical utility of the proposed methods is illustrated with examples from the medical field.
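The edge-weighted graph view of an image described above can be sketched as follows, using 4-adjacency and absolute intensity differences as weights (one common choice of weight function; the thesis considers the representation in general):

```python
def image_to_graph(img):
    """Build an edge-weighted 4-adjacency graph from a 2D intensity image.
    Vertices are pixel coordinates; the edge weight is the absolute
    intensity difference, so low weight suggests the pixels belong together."""
    rows, cols = len(img), len(img[0])
    edges = []
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols:  # horizontal neighbour
                edges.append(((r, c), (r, c + 1), abs(img[r][c] - img[r][c + 1])))
            if r + 1 < rows:  # vertical neighbour
                edges.append(((r, c), (r + 1, c), abs(img[r][c] - img[r + 1][c])))
    return edges

img = [[0, 0, 9],
       [0, 1, 9]]
edges = image_to_graph(img)
print(len(edges))  # 7 edges for a 2x3 image
```

On this representation, segmentation becomes the purely combinatorial problem of choosing a cut, which is what makes provably correct algorithms tractable.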
  •  
45.
  •  
46.
  •  
47.
  • Malmberg, Filip, 1980- (author)
  • Image Foresting Transform: On-the-fly Computation of Segmentation Boundaries
  • 2011
  • In: Image Analysis: 17th Scandinavian Conference, SCIA 2011. - : Springer.
  • Conference paper (peer-reviewed)abstract
    • The Image Foresting Transform (IFT) is a framework for seeded image segmentation, based on the computation of minimal cost paths in a discrete representation of an image. In two recent publications, we have shown that the segmentations obtained by the IFT may be improved by refining the segmentation locally around the boundaries between segmented regions. Since these methods operate on a small subset of the image elements only, they may be implemented efficiently if the set of boundary elements is known. Here, we show that this set may be obtained on-the-fly, at virtually no additional cost, as a by-product of the IFT algorithm.
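The IFT itself can be sketched as a Dijkstra-like propagation from the seeds. The sketch below uses the common f_max path cost (the maximum arc weight along a path), which is one standard choice for segmentation rather than the specific cost function of the paper:

```python
import heapq

def ift_segment(n, adj, seeds):
    """Image Foresting Transform sketch: each vertex receives the label of
    the seed reachable by the path minimizing the maximum arc weight (f_max).
    adj   -- adjacency dict: u -> list of (v, weight)
    seeds -- dict vertex -> label"""
    cost = {v: float('inf') for v in range(n)}
    label = {v: None for v in range(n)}
    heap = []
    for s, l in seeds.items():
        cost[s] = 0
        label[s] = l
        heapq.heappush(heap, (0, s))
    while heap:
        c, u = heapq.heappop(heap)
        if c > cost[u]:
            continue  # stale heap entry
        for v, w in adj.get(u, []):
            new_cost = max(c, w)  # f_max: bottleneck cost along the path
            if new_cost < cost[v]:
                cost[v] = new_cost
                label[v] = label[u]  # v joins the forest rooted at u's seed
                heapq.heappush(heap, (new_cost, v))
    return [label[v] for v in range(n)]

# Path 0-1-2-3 with a weak link between 1 and 2; seeds at the ends.
adj = {0: [(1, 1)], 1: [(0, 1), (2, 9)], 2: [(1, 9), (3, 2)], 3: [(2, 2)]}
print(ift_segment(4, adj, {0: 'A', 3: 'B'}))  # ['A', 'A', 'B', 'B']
```

The boundary elements mentioned in the abstract are exactly the vertices with a neighbour carrying a different label, which this propagation could record as it runs.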
  •  
48.
  • Malmberg, Filip, et al. (author)
  • Interactive Deformation of Volume Images for Image Registration
  • 2015
  • In: Proc. Interactive Medical Image Computing Workshop.
  • Conference paper (peer-reviewed)abstract
    • Deformable image registration, the task of finding a spatial transformation that aligns two or more images with each other, is an important task in medical image analysis. To a large extent, research on image registration has been focused on automatic methods. This is in contrast to, e.g., image segmentation, where interactive semi-automatic methods are common. Here, we propose a method for interactive editing of a deformation field aligning two volume images. The method has been implemented in a software tool that allows the user to click and drag points in the deformed image to a new location, while smoothly deforming surrounding points. The method is fast enough to allow real-time display of the deformed volume image during user interaction, on standard hardware. The resulting tool is useful for initializing automatic methods, and to correct errors in automatically generated registrations.
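A minimal sketch of the click-and-drag editing idea: translate the grabbed point fully and let the displacement decay smoothly with distance, here via a Gaussian falloff. This is a hypothetical interpolation scheme chosen for illustration; the paper's actual deformation model may differ:

```python
import math

def drag(points, handle, target, sigma=1.0):
    """Move `handle` to `target`, displacing nearby points with a Gaussian
    falloff so the edited deformation field stays smooth."""
    dx = target[0] - handle[0]
    dy = target[1] - handle[1]
    out = []
    for (x, y) in points:
        d2 = (x - handle[0]) ** 2 + (y - handle[1]) ** 2
        w = math.exp(-d2 / (2 * sigma ** 2))  # 1 at the handle, -> 0 far away
        out.append((x + w * dx, y + w * dy))
    return out

pts = [(0.0, 0.0), (0.0, 1.0), (0.0, 5.0)]
moved = drag(pts, handle=(0.0, 0.0), target=(1.0, 0.0))
print(moved[0])  # the grabbed point moves fully: (1.0, 0.0)
```

Because each point's update is independent and cheap, such a scheme is easy to evaluate per frame, consistent with the real-time interaction the abstract describes.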
  •  
49.
  • Malmberg, Filip, et al. (author)
  • Interactive Segmentation with Relaxed Image Foresting Transforms
  • 2009
  • In: Proceedings of SSBA 2009. - 9789163339240
  • Conference paper (other academic/artistic)abstract
    • The Image Foresting Transform (IFT) is a framework for efficient image partitioning, used in interactive segmentation. We propose a modified version of the IFT, and demonstrate that the modified algorithm is more robust to noise while maintaining the same computational complexity. We also show an application of the method for interactive segmentation of back muscles in magnetic resonance images, where seedpoints representing object and background are placed repeatedly until a desired segmentation result is obtained.
  •  
50.
  • Malmberg, Filip, et al. (author)
  • Measurement of fibre-fibre contact in three-dimensional images of fibrous materials obtained from X-ray synchrotron microtomography
  • 2011
  • In: Nuclear Instruments and Methods in Physics Research Section A. - : Elsevier BV. - 0168-9002 .- 1872-9576. ; 637:1, s. 143-148
  • Journal article (peer-reviewed)abstract
    • A series of wood-fibre mats was investigated using high-resolution phase-contrast microtomography at the beamline ID 19 of the European Synchrotron Radiation Facility in Grenoble, France. A method for data reduction to quantify the degree of fibre-fibre contact has been derived. The degree of fibre-fibre contact and bonding plays a fundamental role in the mechanical properties of cellulose-fibre mats, paper materials and cellulose-fibre composites. The proposed computerised automated method consists of two parts. First, fibre lumens are segmented using a watershed based method. This information is then used to identify fibre-fibre contacts in projections along the z-axis of the material. The method is tested on microtomographic images of mats made of wood pulp fibres, and is shown to successfully detect differences in the amount of fibre-fibre contact between samples. The degree of fibre-fibre contact correlates well with measured out-of-plane strength of the fibrous material.
  •  
Type of publication
journal article (41)
conference paper (39)
other publication (11)
doctoral thesis (8)
book chapter (2)
editorial collection (1)
reports (1)
book (1)
licentiate thesis (1)
Type of content
peer-reviewed (69)
other academic/artistic (35)
pop. science, debate, etc. (1)
Author/Editor
Malmberg, Filip (62)
Malmberg, Filip, 198 ... (40)
Strand, Robin, 1978- (22)
Nyström, Ingela (21)
Kullberg, Joel (17)
Ahlström, Håkan, 195 ... (16)
Kullberg, Joel, 1979 ... (15)
Strand, Robin (14)
Ahlström, Håkan (10)
Lindblad, Joakim (10)
Nordenskjöld, Richar ... (8)
Lind, Lars (7)
Johansson, Lars (6)
Sjöholm, Therese (6)
Sintorn, Ida-Maria (5)
Borgefors, Gunilla (5)
Nyström, Ingela, 196 ... (5)
Larsson, Elna-Marie (4)
Ekström, Simon (4)
Östlund, Catherine (4)
Bengtsson, Ewert (4)
Svensson, Stina (4)
Breznik, Eva (4)
Berglund, Johan (3)
Andersson, Axel (3)
Wählby, Carolina, pr ... (3)
Sladoje, Nataša (3)
Thor, Andreas (3)
Luengo Hendriks, Cri ... (3)
Axelsson, Maria (3)
Nygård, Per (3)
Nysjö, Fredrik (3)
Larsson, Sune (2)
Örberg, Jan (2)
Sundbom, Magnus (2)
Eriksson, Jan W. (2)
Ahmad, Shafqat (2)
Fall, Tove, 1979- (2)
Menzel, Uwe (2)
Lindström, Mikael (2)
Karlsson, Helen (2)
Almgren, Karin (2)
Behanova, Andrea (2)
Rönn, Monika (2)
Marklund, Johan (2)
Seipel, Stefan (2)
Söderberg, Per G. (2)
Ayyalasomayajula, Ka ... (2)
Brun, Anders, 1976- (2)
Stattin, Mikael, 195 ... (2)
University
Uppsala University (100)
Swedish University of Agricultural Sciences (14)
RISE (3)
Umeå University (2)
Royal Institute of Technology (2)
Linköping University (2)
Lund University (2)
Karolinska Institutet (2)
University of Gävle (1)
Chalmers University of Technology (1)
Language
English (104)
Swedish (1)
Research subject (UKÄ/SCB)
Engineering and Technology (45)
Natural sciences (42)
Medical and Health Sciences (24)
Social Sciences (2)
