SwePub
Search the SwePub database


Hit list for the search "L773:0169 2607 srt2:(2020-2024)"


  • Result 1-16 of 16
1.
  • Aghanavesi, Somayeh, 1981-, et al. (author)
  • A multiple motion sensors index for motor state quantification in Parkinson's disease
  • 2020
  • In: Computer Methods and Programs in Biomedicine. - : Elsevier BV. - 0169-2607 .- 1872-7565. ; 189
  • Journal article (peer-reviewed), abstract:
    • Aim: To construct a Treatment Response Index from Multiple Sensors (TRIMS) for quantification of motor state in patients with Parkinson's disease (PD) during a single levodopa dose. Another aim was to compare TRIMS to sensor indexes derived from individual motor tasks. Method: Nineteen PD patients performed three motor tests, including leg agility, pronation-supination movement of the hands, and walking, in a clinic while wearing inertial measurement unit sensors on their wrists and ankles. They performed the tests repeatedly before and after taking 150% of their individual oral levodopa-carbidopa equivalent morning dose. Three neurologists, blinded to treatment status, viewed the patients' videos and rated their motor symptoms, dyskinesia, and overall motor state based on selected items of the Unified PD Rating Scale (UPDRS) part III, the Dyskinesia scale, and the Treatment Response Scale (TRS). To build TRIMS, 39 of the 178 features initially extracted from upper- and lower-limb data were selected by stepwise regression and used as input to support vector machines, which mapped them to mean reference TRS scores under 10-fold cross-validation. Test-retest reliability, responsiveness to medication, and correlation to TRS as well as to other UPDRS items were evaluated for TRIMS. Results: The correlation of TRIMS with TRS was 0.93. TRIMS had good test-retest reliability (ICC = 0.83). Responsiveness of TRIMS to medication was good compared to TRS, indicating its power in capturing treatment effects. TRIMS was highly correlated to the dyskinesia (R = 0.85), bradykinesia (R = 0.84) and gait (R = 0.79) UPDRS items. The correlation of the upper-limb sensor index with TRS was 0.89. Conclusion: Fusing upper- and lower-limb sensor data to construct TRIMS provided accurate estimation of PD motor states that was responsive to treatment. In addition, quantification of upper-limb sensor data during the walking test provided strong results.
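The feature-to-score mapping described in the Aghanavesi et al. record above (39 selected sensor features regressed onto mean TRS scores with support vector machines under 10-fold cross-validation) can be illustrated with a minimal scikit-learn sketch. This is not the authors' code: the feature matrix and TRS-like scores are random placeholders, and the kernel and hyperparameters are assumptions.

    # Illustrative sketch only: SVM regression from selected sensor features to
    # mean TRS scores with 10-fold cross-validation, as outlined in the abstract.
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import KFold, cross_val_predict

    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 39))    # placeholder feature matrix (samples x 39 features)
    y = rng.uniform(-3, 3, size=120)  # placeholder mean TRS-like reference scores

    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0))
    cv = KFold(n_splits=10, shuffle=True, random_state=0)
    trims = cross_val_predict(model, X, y, cv=cv)   # out-of-fold index values
    print("correlation with reference:", np.corrcoef(trims, y)[0, 1])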
2.
  • Ahkami, Bahareh, 1994, et al. (author)
  • Locomotion Decoding (LocoD) : An Open-Source Modular Platform for Researching Control of Lower Limb Assistive Devices.
  • 2023
  • In: Computer Methods and Programs in Biomedicine. - 1872-7565 .- 0169-2607.
  • Journal article (peer-reviewed), abstract:
    • Background and Objective: Commercially available motorized prosthetic legs use exclusively non-biological signals to control movements, such as those provided by load cells, pressure sensors, and inertial measurement units (IMUs). Although biological signals of neuromuscular origin can provide more natural control of leg prostheses, these signals cannot yet be captured and decoded reliably enough to be used in daily life. Decoding motor intention from bioelectric signals obtained from the residual limb holds great potential, and the study of decoding algorithms has therefore increased in recent years, with standardized methods yet to be established. Methods: In the absence of shared tools to record and process lower limb bioelectric signals, such as electromyography (EMG), we developed an open-source software platform to unify the recording and processing (pre-processing, feature extraction, and classification) of EMG and non-biological signals amongst researchers, with the goal of investigating and benchmarking control algorithms. We validated our locomotion decoding (LocoD) software by comparing the accuracy of locomotion mode classification using three different combinations of sensors (1 = IMU+EMG, 2 = EMG, 3 = IMU). EMG and non-biological signals (from the IMU and pressure sensor) were recorded while able-bodied participants (n = 21) walked on different surfaces such as stairs and ramps, and this data set is released publicly alongside this publication. LocoD was used for all recording, pre-processing, feature extraction, and classification of the recorded signals. We tested the statistical hypothesis that there was a difference in predicted locomotion mode accuracy between sensor combinations using the Wilcoxon signed-rank test. Results: Sensor combination 1 (EMG+IMU) led to significantly more accurate locomotion mode prediction (Accuracy = 93.4 ± 3.9) than using EMG (Accuracy = 74.56 ± 5.8) or IMU alone (Accuracy = 90.77 ± 4.6), with p-value < 0.001. Conclusions: Our results support previous research and validate the functionality of LocoD as an open-source and modular platform to research control algorithms for prosthetic legs that incorporate bioelectric signals.
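The statistical comparison reported above, paired per-participant classification accuracies for different sensor combinations tested with the Wilcoxon signed-rank test, can be sketched in a few lines of SciPy. The accuracy arrays below are made-up placeholders, not the released LocoD data set.

    # Illustrative sketch (not part of LocoD): paired Wilcoxon signed-rank test
    # on per-participant accuracies for two sensor combinations (n = 21).
    import numpy as np
    from scipy.stats import wilcoxon

    rng = np.random.default_rng(1)
    acc_emg_imu = np.clip(rng.normal(0.93, 0.04, size=21), 0.0, 1.0)   # placeholder
    acc_imu_only = np.clip(rng.normal(0.91, 0.05, size=21), 0.0, 1.0)  # placeholder

    stat, p_value = wilcoxon(acc_emg_imu, acc_imu_only)
    print(f"Wilcoxon W = {stat:.1f}, p = {p_value:.4f}")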
3.
  • Beháňová, Andrea, et al. (author)
  • gACSON software for automated segmentation and morphology analyses of myelinated axons in 3D electron microscopy
  • 2022
  • In: Computer Methods and Programs in Biomedicine. - : Elsevier. - 0169-2607 .- 1872-7565. ; 220
  • Journal article (peer-reviewed), abstract:
    • Background and Objective: Advances in electron microscopy (EM) now allow three-dimensional (3D) imaging of hundreds of micrometers of tissue with nanometer-scale resolution, providing new opportunities to study the ultrastructure of the brain. In this work, we introduce a freely available Matlab-based gACSON software for visualization, segmentation, assessment, and morphology analysis of myelinated axons in 3D-EM volumes of brain tissue samples. Methods: The software is equipped with a graphical user interface (GUI). It automatically segments the intra-axonal space of myelinated axons and their corresponding myelin sheaths and allows manual segmentation, proofreading, and interactive correction of the segmented components. gACSON analyzes the morphology of myelinated axons, such as axonal diameter, axonal eccentricity, myelin thickness, or g-ratio. Results: We illustrate the use of the software by segmenting and analyzing myelinated axons in six 3D-EM volumes of rat somatosensory cortex after sham surgery or traumatic brain injury (TBI). Our results suggest that the equivalent diameter of myelinated axons in somatosensory cortex was decreased in TBI animals five months after the injury. Conclusion: Our results indicate that gACSON is a valuable tool for visualization, segmentation, assessment, and morphology analysis of myelinated axons in 3D-EM volumes. It is freely available at https://github.com/AndreaBehan/g-ACSON under the MIT license.
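The morphology measures named above follow standard definitions: the equivalent diameter of an axon cross-section with area A is 2*sqrt(A/pi), and the g-ratio is the inner (axonal) diameter divided by the outer (fibre) diameter, i.e. the inner diameter plus twice the myelin thickness. A small NumPy sketch with assumed per-axon inputs (this is not gACSON's Matlab code):

    # Illustrative computation of equivalent diameter and g-ratio from assumed
    # per-axon measurements; not taken from the gACSON implementation.
    import numpy as np

    axon_area = np.array([0.50, 0.80, 1.20])          # intra-axonal area, um^2 (placeholder)
    myelin_thickness = np.array([0.10, 0.12, 0.15])   # myelin thickness, um (placeholder)

    inner_diameter = 2.0 * np.sqrt(axon_area / np.pi)        # equivalent diameter
    outer_diameter = inner_diameter + 2.0 * myelin_thickness
    g_ratio = inner_diameter / outer_diameter
    print(np.round(g_ratio, 3))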
4.
  • Caruso, Camillo Maria, et al. (author)
  • A deep learning approach for overall survival prediction in lung cancer with missing values
  • 2024
  • In: Computer Methods and Programs in Biomedicine. - : Elsevier. - 0169-2607 .- 1872-7565. ; 254
  • Journal article (peer-reviewed), abstract:
    • Background and Objective: In the field of lung cancer research, particularly in the analysis of overall survival (OS), artificial intelligence (AI) serves crucial roles with specific aims. Given the prevalent issue of missing data in the medical domain, our primary objective is to develop an AI model capable of dynamically handling this missing data. Additionally, we aim to leverage all accessible data, effectively analyzing both uncensored patients who have experienced the event of interest and censored patients who have not, by embedding a specialized technique within our AI model, not commonly utilized in other AI tasks. Through the realization of these objectives, our model aims to provide precise OS predictions for non-small cell lung cancer (NSCLC) patients, thus overcoming these significant challenges. Methods: We present a novel approach to survival analysis with missing values in the context of NSCLC, which exploits the strengths of the transformer architecture to account only for available features without requiring any imputation strategy. More specifically, this model tailors the transformer architecture to tabular data by adapting its feature embedding and masked self-attention to mask missing data and fully exploit the available features. By making use of ad-hoc designed losses for OS, it is able to account for both censored and uncensored patients, as well as changes in risk over time. Results: We compared our method with state-of-the-art models for survival analysis coupled with different imputation strategies. We evaluated the results obtained over a period of 6 years using different time granularities, obtaining a Ct-index, a time-dependent variant of the C-index, of 71.97, 77.58 and 80.72 for time units of 1 month, 1 year and 2 years, respectively, outperforming all state-of-the-art methods regardless of the imputation method used. Conclusions: The results show that our model not only outperforms the state of the art but also simplifies the analysis in the presence of missing data, effectively eliminating the need to identify the most appropriate imputation strategy for predicting OS in NSCLC patients.
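The core mechanism described above, masking missing tabular features inside self-attention so that no imputation is needed, can be illustrated with a short PyTorch sketch. This is not the paper's architecture: the embeddings, dimensions and the use of a key padding mask are assumptions, and the survival-specific losses for censored and uncensored patients are omitted.

    # Illustrative sketch only: one feature token per tabular variable, with a
    # boolean mask keeping missing features out of the attention computation.
    import torch
    import torch.nn as nn

    n_features, d_model = 10, 32
    x = torch.randn(4, n_features)               # batch of 4 patients (placeholder values)
    missing = torch.rand(4, n_features) < 0.3    # True where a feature is missing
    x = x.masked_fill(missing, 0.0)              # placeholder fill; the mask excludes it anyway

    value_embed = nn.Linear(1, d_model)             # per-feature value embedding
    feature_id = nn.Embedding(n_features, d_model)  # learned feature-identity embedding
    tokens = value_embed(x.unsqueeze(-1)) + feature_id(torch.arange(n_features))

    attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
    out, _ = attn(tokens, tokens, tokens, key_padding_mask=missing)
    print(out.shape)   # torch.Size([4, 10, 32]); a survival head would follow here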
5.
  • Cava, José Manuel Gonzáles, et al. (author)
  • Robust PID control of propofol anaesthesia: uncertainty limits performance, not PID structure
  • 2021
  • In: Computer Methods and Programs in Biomedicine. - : Elsevier BV. - 0169-2607. ; 198, s. 1-1
  • Journal article (peer-reviewed), abstract:
    • Background and objective: New proposals to improve the regulation of hypnosis in anaesthesia based on the development of advanced control structures emerge continuously. However, a fair study analysing the real benefits of these structures compared to simpler, clinically validated PID-based solutions has not been presented so far. The main objective of this work is to analyse the performance limitations associated with using a filtered PID controller, as compared to a high-order controller represented through a Youla parameter. Methods: The comparison consists of a two-step methodology. First, two robust optimal filtered PID controllers, considering the effect of inter-patient variability, are synthesised. A set of 47 validated paediatric pharmacological models, identified from clinical data, is used to this end. This model set provides representative inter-patient variability. Second, individualised filtered PID and Youla controllers are synthesised for each model in the set. For fairness of comparison, the same performance objective is optimised for all designs, and the same robustness constraints are considered. Controller synthesis is performed utilising convex optimisation and gradient-based methods relying on algebraic differentiation. The worst-case performance over the patient model set is used for the comparison. Results: Two robust filtered PID controllers for the entire model set, as well as individual-specific PID and Youla controllers, were optimised. All considered designs resulted in similar frequency response characteristics. The performance improvement associated with the Youla controllers was not significant compared to the individually tuned filtered PID controllers. The difference in performance between controllers synthesised for the model set and for individual models was significantly larger than the performance difference between the individual-specific PID and Youla controllers. The different controllers were evaluated in simulation. Although all of them showed clinically acceptable results, the robust solutions provided slower responses. Conclusion: Taking the same clinical and technical considerations into account for the optimisation of the different controllers, the design of individual-specific solutions resulted in only marginal differences in performance when comparing an optimal Youla parameter and its optimal filtered PID counterpart. The inter-patient variability is much more detrimental to performance than the limitations imposed by the simple structure of the filtered PID controller.
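For reference, a common parallel form of a filtered PID controller and the worst-case (min-max) design over the patient model set can be written out as below. The abstract does not give the exact parameterization or cost used in the paper, so this is only a sketch of the general setup.

    % Sketch only: a parallel filtered PID structure and a worst-case objective
    % over the 47 patient models; the paper's exact cost J_i is not specified
    % in the abstract.
    \[
      C_{\mathrm{PID}}(s) = k_p + \frac{k_i}{s} + \frac{k_d\, s}{1 + s T_f},
      \qquad
      \min_{C \in \mathcal{C}} \; \max_{i \in \{1,\dots,47\}} J_i(C)
      \quad \text{subject to robustness constraints,}
    \]
    where $\mathcal{C}$ is either the filtered PID family above or the set of
    stabilising controllers parameterised by a Youla parameter $Q$.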
6.
  • Guerrero, Esteban, et al. (author)
  • Forming We-intentions under breakdown situations in human-robot interactions
  • 2023
  • In: Computer Methods and Programs in Biomedicine. - : Elsevier. - 0169-2607 .- 1872-7565. ; 242
  • Journal article (peer-reviewed), abstract:
    • Background and Objective: When agents (e.g. a person and a social robot) perform a joint activity to achieve a joint goal, they require sharing a relevant group intention, which has been defined as a We-intention. In forming We-intentions, breakdown situations due to conflicts between internal and “external” intentions are unavoidable, particularly in healthcare scenarios. To study such We-intention formation and “reparation” of conflicts, this paper has a two-fold objective: to introduce a general computational mechanism allowing We-intention formation and reparation in interactions between a social robot and a person; and to exemplify how the formal framework can be applied to facilitate interaction between a person and a social robot in healthcare scenarios. Method: The formal computational framework for managing We-intentions was defined in terms of Answer set programming and a Belief-Desire-Intention control loop. We exemplify the formal framework based on earlier theory-based user studies consisting of human-robot dialogue scenarios conducted in a Wizard of Oz setup, video-recorded and evaluated with 20 participants. Data was collected through semi-structured interviews, which were analyzed qualitatively using thematic analysis. N = 20 participants (women n = 12, men n = 8, age range 23-72) were part of the study. Two age groups were established for the analysis: younger participants (ages 23-40) and older participants (ages 41-72). Results: We proved four theoretical propositions, which are desirable characteristics of any rational social robot. In our study, most participants suggested that people were the cause of breakdown situations. Over half of the younger participants perceived the social robot's avoidant behavior in the scenarios. Conclusions: This work covered in depth the challenge of aligning the intentions of two agents (for example, in a person-robot interaction) when they try to achieve a joint goal. Our framework provides a novel formalization of the We-intentions theory from social science. The framework is supported by formal properties proving that our computational mechanism generates consistent potential plans. At the same time, the agent can handle incomplete and inconsistent intentions shared by another agent (for example, a person). Finally, our qualitative results suggested that this approach could provide an acceptable level of action/intention agreement generation and reparation from a person-centric perspective.
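As a loose illustration of the breakdown-and-reparation idea described above (and not the paper's Answer set programming formalization), a toy reparation step might drop a robot intention that the person has explicitly rejected and adopt the person's own intentions into the shared set:

    # Toy sketch only: reparation of a candidate shared ("We") intention set.
    # The paper's formal BDI/ASP mechanism is not reproduced here; rejection is
    # represented naively as the string "not <intention>".
    def repair_we_intention(robot_intentions, human_intentions):
        rejected = {i for i in robot_intentions if f"not {i}" in human_intentions}
        adopted = {i for i in human_intentions if not i.startswith("not ")}
        return (robot_intentions - rejected) | adopted

    # Example: the robot intends "fetch water"; the person rejects it and wants "rest".
    print(repair_we_intention({"fetch water"}, {"not fetch water", "rest"}))  # {'rest'}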
7.
8.
  • Lundsberg, Jonathan, et al. (author)
  • Compressed spike-triggered averaging in iterative decomposition of surface EMG
  • 2023
  • In: Computer Methods and Programs in Biomedicine. - : Elsevier BV. - 0169-2607. ; 228
  • Journal article (peer-reviewed), abstract:
    • Background and Objective: Analysis of motor unit activity is important for assessing and treating diseases or injuries affecting natural movement. State-of-the-art decomposition translates high-density surface electromyography (HDsEMG) into motor unit activity. However, current decomposition methods offer far from complete separation of all motor units. Methods: This paper proposes a peel-off approach to automatic decomposition of HDsEMG into motor unit action potential (MUAP) trains, based on the Fast Independent Component Analysis algorithm (FastICA). The novel steps include utilizing compression by means of Principal Component Analysis and spike-triggered averaging, to estimate surface MUAP distributions with less noise, which are iteratively subtracted from the HDsEMG dataset. Furthermore, motor unit spike trains are estimated by high-dimensional density-based clustering of peaks in the FastICA source output. And finally, a new reliability measure is used to discard poor motor unit estimates by comparing the variance of the FastICA source output before and after the peel-off step. The method was validated using reconstructed synthetic data at three different signal-to-noise levels and was compared to an established deflationary FastICA approach. Results: Both algorithms had very high recall and precision, over 90%, for spikes from matching motor units, referred to as matched performance. However, the peel-off algorithm correctly identified more motor units for all noise levels. When accounting for unidentified motor units, total recall was up to 33 percentage points higher; and when accounting for duplicate estimates, total precision was up to 24 percentage points higher, compared to the state-of-the-art reference. In addition, a comparison was done using experimental data where the proposed algorithm had a matched recall of 97% and precision of 85% with respect to the reference algorithm. Conclusion: These results show a substantial performance increase for decomposition of simulated HDsEMG data and serve to validate the proposed approach. This performance increase is an important step towards complete decomposition and extraction of information of motor unit activity. © 2022 The Author(s). Published by Elsevier B.V.
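The processing stages named above (PCA compression, FastICA source extraction, spike detection, and spike-triggered averaging of the surface MUAP) can be outlined with scikit-learn and SciPy. This is only a sketch of the building blocks, not the proposed peel-off algorithm; the HDsEMG array below is random placeholder data and the thresholds are assumptions.

    # Illustrative building blocks (not the authors' pipeline): PCA compression,
    # FastICA, peak detection, and spike-triggered averaging over a HDsEMG array.
    import numpy as np
    from sklearn.decomposition import PCA, FastICA
    from scipy.signal import find_peaks

    rng = np.random.default_rng(2)
    emg = rng.normal(size=(64, 20000))   # 64 channels x samples (placeholder signal)

    compressed = PCA(n_components=16).fit_transform(emg.T)            # samples x 16
    sources = FastICA(n_components=16, random_state=0).fit_transform(compressed)

    src = sources[:, 0]
    peaks, _ = find_peaks(np.abs(src), height=4 * np.std(src))        # candidate spikes

    half = 25   # samples on each side of a spike
    windows = [emg[:, p - half:p + half] for p in peaks if half <= p < emg.shape[1] - half]
    muap_estimate = np.mean(windows, axis=0) if windows else np.zeros((64, 2 * half))
    print(muap_estimate.shape)   # surface MUAP estimate to subtract in a peel-off step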
9.
  • Mahbod, A., et al. (author)
  • Transfer learning using a multi-scale and multi-network ensemble for skin lesion classification
  • 2020
  • In: Computer Methods and Programs in Biomedicine. - : Elsevier BV. - 0169-2607 .- 1872-7565. ; 193, s. 105475-
  • Journal article (peer-reviewed), abstract:
    • Background and objective: Skin cancer is among the most common cancer types in the white population and consequently computer-aided methods for skin lesion classification based on dermoscopic images are of great interest. A promising approach for this uses transfer learning to adapt pre-trained convolutional neural networks (CNNs) for skin lesion diagnosis. Since pre-training commonly occurs with natural images of a fixed image resolution and these training images are usually significantly smaller than dermoscopic images, downsampling or cropping of skin lesion images is required. This however may result in a loss of useful medical information, while the ideal resizing or cropping factor of dermoscopic images for the fine-tuning process remains unknown. Methods: We investigate the effect of image size on skin lesion classification based on pre-trained CNNs and transfer learning. Dermoscopic images from the International Skin Imaging Collaboration (ISIC) skin lesion classification challenge datasets are either resized to or cropped at six different sizes ranging from 224 × 224 to 450 × 450. The resulting classification performance of three well-established CNNs, namely EfficientNetB0, EfficientNetB1 and SeReNeXt-50, is explored. We also propose and evaluate a multi-scale multi-CNN (MSM-CNN) fusion approach based on a three-level ensemble strategy that utilises the three network architectures trained on cropped dermoscopic images of various scales. Results: Our results show that image cropping is a better strategy compared to image resizing, delivering superior classification performance at all explored image scales. Moreover, fusing the results of all three fine-tuned networks using cropped images at all six scales in the proposed MSM-CNN approach boosts the classification performance compared to a single network or a single image scale. On the ISIC 2018 skin lesion classification challenge test set, our MSM-CNN algorithm yields a balanced multi-class accuracy of 86.2%, making it the currently second-ranked algorithm on the live leaderboard. Conclusions: We confirm that the image size has an effect on skin lesion classification performance when employing transfer learning of CNNs. We also show that image cropping results in better performance compared to image resizing. Finally, a straightforward ensembling approach that fuses the results from images cropped at six scales and three fine-tuned CNNs is shown to lead to the best classification performance.
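The fusion step described above, averaging the class probabilities of three fine-tuned networks applied to crops at six scales, reduces to a mean over a networks-by-scales-by-classes tensor. A minimal NumPy sketch with random placeholder probabilities (seven classes assumed, as in the ISIC 2018 classification task):

    # Illustrative sketch of the three-level fusion idea (not the MSM-CNN code):
    # average softmax outputs over networks and crop scales, then take the argmax.
    import numpy as np

    n_networks, n_scales, n_classes = 3, 6, 7
    rng = np.random.default_rng(3)
    probs = rng.dirichlet(np.ones(n_classes), size=(n_networks, n_scales))  # placeholders

    fused = probs.mean(axis=(0, 1))          # average over networks and scales
    predicted_class = int(np.argmax(fused))
    print(np.round(fused, 3), predicted_class)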
10.
11.
12.
  • Matuszewski, Damian J., et al. (author)
  • TEM virus images : Benchmark dataset and deep learning classification
  • 2021
  • In: Computer Methods and Programs in Biomedicine. - : Elsevier. - 0169-2607 .- 1872-7565. ; 209
  • Journal article (peer-reviewed), abstract:
    • Background and Objective: To achieve the full potential of deep learning (DL) models, such as understanding the interplay between model (size), training strategy, and amount of training data, researchers and developers need access to new dedicated image datasets; i.e., annotated collections of images representing real-world problems with all their variations, complexity, limitations, and noise. Here, we present, describe and make freely available an annotated transmission electron microscopy (TEM) image dataset. It constitutes an interesting challenge for many practical applications in virology and epidemiology; e.g., virus detection, segmentation, classification, and novelty detection. We also present benchmarking results for virus detection and recognition using some of the top-performing (large and small) networks as well as a handcrafted very small network. We compare and evaluate transfer learning and training from scratch hypothesizing that with a limited dataset, transfer learning is crucial for good performance of a large network whereas our handcrafted small network performs relatively well when training from scratch. This is one step towards understanding how much training data is needed for a given task.Methods: The benchmark dataset contains 1245 images of 22 virus classes. We propose a representative data split into training, validation, and test sets for this dataset. Moreover, we compare different established DL networks and present a baseline DL solution for classifying a subset of the 14 most-represented virus classes in the dataset.Results: Our best model, DenseNet201 pre-trained on ImageNet and fine-tuned on the training set, achieved a 0.921 F1-score and 93.1% accuracy on the proposed representative test set.Conclusions: Public and real biomedical datasets are an important contribution and a necessity to increase the understanding of shortcomings, requirements, and potential improvements for deep learning solutions on biomedical problems or deploying solutions in clinical settings. We compared transfer learning to learning from scratch on this dataset and hypothesize that for limited-sized datasets transfer learning is crucial for achieving good performance for large models. Last but not least, we demonstrate the importance of application knowledge in creating datasets for training DL models and analyzing their results.
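The baseline described above, DenseNet201 pre-trained on ImageNet with its classifier replaced for the 14 most-represented virus classes, can be set up in a few lines of torchvision. The data pipeline and training loop are omitted, and the optimizer settings are assumptions rather than the paper's.

    # Illustrative sketch only: ImageNet-pretrained DenseNet201 adapted to a
    # 14-class TEM virus classification head; training details are assumed.
    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.densenet201(weights=models.DenseNet201_Weights.IMAGENET1K_V1)
    model.classifier = nn.Linear(model.classifier.in_features, 14)

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)   # assumed settings
    criterion = nn.CrossEntropyLoss()

    dummy = torch.randn(2, 3, 224, 224)   # real TEM crops would replace this
    loss = criterion(model(dummy), torch.tensor([0, 1]))
    loss.backward()
    optimizer.step()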
13.
  • Souza-Pereira, Leonice, et al. (author)
  • Clinical decision support systems for chronic diseases : A systematic literature review.
  • 2020
  • In: Computer Methods and Programs in Biomedicine. - : Elsevier BV. - 0169-2607 .- 1872-7565. ; 195, s. 105565-
  • Journal article (peer-reviewed), abstract:
    • A Clinical Decision Support System (CDSS) aims to assist physicians, nurses and other professionals in decision-making related to the patient's clinical condition. CDSSs deal with pertinent and critical data, and special care should be taken in their design to ensure the development of usable, secure and reliable tools. Objective: This paper aims to investigate existing literature dealing with the development process of CDSSs for monitoring chronic diseases, analysing their functionalities and characteristics, and the software engineering representation in their design. Methods: A systematic literature review (SLR) is conducted to analyse the literature on CDSSs for monitoring chronic diseases and the application of software engineering techniques in their design. Results: Fourteen included studies revealed that the most addressed disease was diabetes (42.8%) and the most commonly proposed approach was diagnostic (85.7%). Regarding data sources, the studies show a predominance of the use of databases (85.7%), with other data sources such as sensors (42.8%) and self-report (28.6%) also being considered. Analysing the representation of engineering techniques, we found Behaviour diagrams (42.8%) to be the most frequent, closely followed by Structural diagrams (35.7%), with others (78.6%) also largely mentioned. Some studies also approached the requirement specification (21.4%). The most common evaluation target was the performance of the system (64.2%) and the most common metric was accuracy (57.1%). Conclusion: We conclude that software engineering, in its completeness, has scarce representation in studies focused on the development of CDSSs for chronic diseases.
14.
  • Souza-Pereira, Leonice, et al. (author)
  • Quality-in-use characteristics for clinical decision support system assessment
  • 2021
  • In: Computer Methods and Programs in Biomedicine. - : Elsevier BV. - 0169-2607 .- 1872-7565. ; 207, s. 106169-106169
  • Journal article (peer-reviewed), abstract:
    • Background: Clinical decision support systems (CDSSs) are developed to support healthcare practitioners with decision-making about therapy and confirmation of diagnosis, among other tasks. Although there are many advantages of using CDSSs, there are still many challenges in their adoption. Therefore, it is essential to ensure the quality of the system, so that it can be used confidently and securely. Objective: This study aims to propose a set of (sub)characteristics which should be considered in evaluating the quality-in-use of CDSSs, based on the ISO/IEC 25010 standard and on existing literature. Methods: We reviewed the existing literature on CDSS assessment and presented a list of the quality characteristics evaluated. Results: Ten quality characteristics and 56 sub-characteristics were identified and selected from the literature, among which usability was the most frequently evaluated. An example scenario is presented to illustrate our assessment approach, with satisfaction and efficiency as important quality-in-use characteristics to be applied in the evaluation of a CDSS. Conclusion: The proposed approach will contribute to bridging the gap between the quality of CDSSs and their adoption.
15.
  • Zhou, Yijun, et al. (author)
  • A convolutional neural network-based method for the generation of super-resolution 3D models from clinical CT images
  • 2024
  • In: Computer Methods and Programs in Biomedicine. - : Elsevier. - 0169-2607 .- 1872-7565. ; 245
  • Journal article (peer-reviewed), abstract:
    • Background and objective: The accurate evaluation of bone mechanical properties is essential for predicting fracture risk based on clinical computed tomography (CT) images. However, blurring and noise in clinical CT images can compromise the accuracy of these predictions, leading to incorrect diagnoses. Although previous studies have explored enhancing trabecular bone CT images to super-resolution (SR), none of these studies have examined the possibility of using clinical CT images from different instruments, typically of lower resolution, as a basis for analysis. Additionally, previous studies rely on 2D SR images, which may not be sufficient for accurate mechanical property evaluation, due to the complex nature of the 3D trabecular bone structures. The objective of this study was to address these limitations. Methods: A workflow was developed that utilizes convolutional neural networks to generate SR 3D models across different clinical CT instruments. The morphological and finite-element-derived mechanical properties of these SR models were compared with ground truth models obtained from micro-CT scans. Results: A significant improvement in analysis accuracy was demonstrated, where the new SR models increased the accuracy by up to 700% compared with the low-resolution data, i.e. clinical CT images. Additionally, we found that the mixture of different CT image datasets may improve the SR model performance. Conclusions: SR images, generated by convolutional neural networks, outperformed clinical CT images in the determination of morphological and mechanical properties. The developed workflow could be implemented for fracture risk prediction, potentially leading to improved diagnoses and subsequent clinical decision making.
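The central component above, a convolutional network that enhances clinical-CT volumes toward micro-CT-like quality in 3D, can be sketched as a small SRCNN-style model. The paper's actual architecture is not described in the abstract, so every layer choice below is an assumption.

    # Illustrative sketch only: an SRCNN-style 3D network mapping a low-resolution
    # clinical-CT patch to a same-sized, enhanced volume. Not the paper's model.
    import torch
    import torch.nn as nn

    class SR3DNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv3d(1, 64, kernel_size=9, padding=4), nn.ReLU(),
                nn.Conv3d(64, 32, kernel_size=1), nn.ReLU(),
                nn.Conv3d(32, 1, kernel_size=5, padding=2),
            )

        def forward(self, x):   # x: (N, 1, D, H, W)
            return self.body(x)

    lr_patch = torch.randn(1, 1, 32, 32, 32)   # placeholder clinical-CT patch
    print(SR3DNet()(lr_patch).shape)           # torch.Size([1, 1, 32, 32, 32])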
16.
  • Zhou, Yijun, et al. (author)
  • A convolutional neural network-based method for the generation of super-resolution 3D models from clinical CT images
  • 2024
  • In: Computer Methods and Programs in Biomedicine. - : Elsevier. - 0169-2607 .- 1872-7565. ; 245
  • Journal article (peer-reviewed), abstract:
    • Background and Objective: The accurate evaluation of bone mechanical properties is essential for predicting fracture risk based on clinical computed tomography (CT) images. However, blurring and noise in clinical CT images can compromise the accuracy of these predictions, leading to incorrect diagnoses. Although previous studies have explored enhancing trabecular bone CT images to super-resolution (SR), none of these studies have examined the possibility of using clinical CT images from different instruments, typically of lower resolution, as a basis for analysis. Additionally, previous studies rely on 2D SR images, which may not be sufficient for accurate mechanical property evaluation, due to the complex nature of the 3D trabecular bone structures. The objective of this study was to address these limitations. Methods: A workflow was developed that utilizes convolutional neural networks to generate super-resolution 3D models across different clinical CT instruments. The morphological and finite-element-derived mechanical properties of these super-resolution models were compared with ground truth models obtained from micro-CT scans. Results: A significant improvement in analysis accuracy was demonstrated, where the new SR models increased the accuracy by up to 700% compared with the low-resolution data, i.e. clinical CT images. Additionally, we found that the mixture of different CT image datasets may improve the super-resolution model performance. Conclusions: Super-resolution images, generated by convolutional neural networks, outperformed clinical CT images in the determination of morphological and mechanical properties. The developed workflow could be implemented for fracture risk prediction, potentially leading to improved diagnoses and subsequent clinical decision making.
Type of publication
journal article (16)
Type of content
peer-reviewed (16)
Author/Editor
Klintström, Benjamin (2)
Klintström, Eva, 195 ... (2)
Persson, Cecilia (2)
Ouhbi, Sofia (2)
Helgason, Benedikt (2)
Cervin, Anton (1)
Li, Y. (1)
Zhang, YQ (1)
Ortiz Catalan, Max J ... (1)
Medvedev, Alexander, ... (1)
Björkman, Anders (1)
Johansson Buvarp, Do ... (1)
Soltesz, Kristian (1)
Aghanavesi, Somayeh, ... (1)
Westin, Jerker (1)
Bergquist, Filip, 19 ... (1)
Nyholm, Dag (1)
Askmark, Håkan (1)
Memedi, Mevludin, Ph ... (1)
Constantinescu, Radu ... (1)
Spira, J. (1)
Ohlsson, Fredrik, 19 ... (1)
Thomas, Ilias (1)
Ericsson, A. (1)
Memedi, M. (1)
Aquilonius, Sten-Mag ... (1)
Ahmed, Kirstin, 1974 (1)
Ahkami, Bahareh, 199 ... (1)
Kristoffersen, Morte ... (1)
Sierra, Alejandra (1)
Bergh, C. (1)
Soda, Paolo (1)
Antfolk, Christian (1)
Malesevic, Nebojsa (1)
Clements, M (1)
Behanova, Andrea (1)
Wang, Chunliang (1)
Jokitalo, Eija (1)
Maglaveras, N. (1)
Mao, W. (1)
Jakobsen, LH (1)
Bagge Carlson, Fredr ... (1)
Troeng, Olof (1)
Ferguson, Stephen J. (1)
Abdollahzadeh, Ali (1)
Belevich, Ilya (1)
Tohka, Jussi (1)
Sintorn, Ida-Maria, ... (1)
Ioakimidis, I (1)
Lindgren, Helena, Pr ... (1)
University
Uppsala University (6)
Karolinska Institutet (3)
University of Gothenburg (2)
Umeå University (2)
Linköping University (2)
Lund University (2)
Chalmers University of Technology (2)
Royal Institute of Technology (1)
Örebro University (1)
Högskolan Dalarna (1)
Language
English (16)
Research subject (UKÄ/SCB)
Natural sciences (6)
Engineering and Technology (6)
Medical and Health Sciences (5)
