SwePub
Search the SwePub database


Result list for the search "WFRF:(Trygg Johan) ;mspu:(conferencepaper)"


  • Results 1-10 of 15
1.
  • Abbasi, Ahtisham Fazeel, et al. (author)
  • Deep learning architectures for the prediction of YY1-mediated chromatin loops
  • 2023
  • In: Bioinformatics research and applications. - : Springer. - 9789819970735 - 9789819970742 ; pp. 72-84
  • Conference paper (peer-reviewed) abstract
    • YY1-mediated chromatin loops play substantial roles in basic biological processes like gene regulation, cell differentiation, and DNA replication. Predicting YY1-mediated chromatin loops is important for understanding diverse biological processes and may lead to the development of new therapeutics for neurological disorders and cancers. Existing deep learning predictors can predict YY1-mediated chromatin loops in two different cell lines; however, they show limited performance for the prediction of YY1-mediated loops within the same cell lines and suffer significant performance deterioration in the cross-cell-line setting. To provide computational predictors capable of performing large-scale analyses of YY1-mediated loop prediction across multiple cell lines, this paper presents two novel deep learning predictors. The two proposed predictors use Word2vec and one-hot encoding for sequence representation, together with long short-term memory and a convolutional neural network along with a gradient flow strategy similar to DenseNet architectures. Both predictors are evaluated on two benchmark datasets from two cell lines, HCT116 and K562. Overall, the proposed predictors outperform the existing DEEPYY1 predictor with average maximum margins of 4.65% and 7.45% in terms of AUROC and accuracy across both datasets over the independent test sets, and 5.1% and 3.2% over 5-fold cross-validation. In cross-cell-line evaluation, the proposed predictors achieve maximum performance improvements of up to 9.5% and 27.1% in terms of AUROC over the HCT116 and K562 datasets.
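The abstract above outlines a hybrid architecture: one-hot/Word2vec sequence encodings feeding convolutions with DenseNet-style feature concatenation and a long short-term memory layer. The following Python sketch illustrates that general idea only; it is not the authors' implementation, and all layer sizes, names, and the toy input are assumptions.

# Illustrative sketch (not the paper's code) of a hybrid CNN + LSTM predictor
# with DenseNet-style feature concatenation for one-hot encoded DNA anchors.
import torch
import torch.nn as nn

def one_hot_dna(seq):
    """Encode an A/C/G/T string as a (4, len) one-hot tensor."""
    mapping = {"A": 0, "C": 1, "G": 2, "T": 3}
    out = torch.zeros(4, len(seq))
    for i, base in enumerate(seq.upper()):
        if base in mapping:
            out[mapping[base], i] = 1.0
    return out

class DenseConvLSTM(nn.Module):
    def __init__(self, channels=32, hidden=64):
        super().__init__()
        self.conv1 = nn.Conv1d(4, channels, kernel_size=7, padding=3)
        # Dense-style connectivity: conv2 sees the raw input concatenated with conv1's output.
        self.conv2 = nn.Conv1d(4 + channels, channels, kernel_size=7, padding=3)
        self.lstm = nn.LSTM(4 + 2 * channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # probability of a YY1-mediated loop

    def forward(self, x):                      # x: (batch, 4, seq_len)
        f1 = torch.relu(self.conv1(x))
        f2 = torch.relu(self.conv2(torch.cat([x, f1], dim=1)))
        feats = torch.cat([x, f1, f2], dim=1)  # concatenate all earlier features
        _, (h, _) = self.lstm(feats.transpose(1, 2))
        return torch.sigmoid(self.head(h[-1]))

model = DenseConvLSTM()
anchor = one_hot_dna("ACGT" * 50).unsqueeze(0)  # toy 200-bp anchor sequence
print(model(anchor).item())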
2.
  • Asim, Muhammad Nabeel, et al. (author)
  • L2S-MirLoc : A Lightweight Two Stage MiRNA Sub-Cellular Localization Prediction Framework
  • 2021
  • In: Proceedings of the International Joint Conference on Neural Networks. - : IEEE. - 9780738133669 - 9781665439008 - 9781665445979
  • Conference paper (peer-reviewed) abstract
    • A comprehensive understanding of miRNA sub-cellular localization may lead to a better understanding of physiological processes and support the correction of diverse irregularities present in a variety of organisms. To date, diverse computational methodologies have been proposed to automatically infer the sub-cellular localization of miRNAs solely from sequence information; however, existing approaches fall short in performance. Considering the success of data transformation approaches in Natural Language Processing, which primarily transform a multi-label classification problem into a multi-class classification problem, here we introduce three different data transformation approaches, namely binary relevance, label powerset, and classifier chains. Using these data transformation approaches, in the first stage the multi-label miRNA sub-cellular localization problem is transformed into a multi-class problem. Then, in the second stage, three different machine learning classifiers are used to estimate which classifier performs best with which data transformation approach for the task at hand. Empirical evaluation on an independent test set indicates that the selected L2S-MirLoc combination, based on binary relevance and deep random forest, outperforms state-of-the-art performance values by a significant margin.
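Binary relevance, label powerset, and classifier chains are standard multi-label data transformations; the sketch below illustrates all three on a toy problem with scikit-learn. The feature and label shapes and the random-forest base classifier are assumptions for illustration, not the L2S-MirLoc code.

# Illustrative sketch (not the paper's code) of the three data transformations
# named in the abstract, applied to a toy multi-label problem.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.multiclass import OneVsRestClassifier
from sklearn.multioutput import ClassifierChain

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))                 # hypothetical sequence-derived features
Y = (rng.random((200, 3)) > 0.6).astype(int)   # 3 hypothetical sub-cellular locations

# 1) Binary relevance: one independent binary classifier per label.
binary_relevance = OneVsRestClassifier(RandomForestClassifier()).fit(X, Y)

# 2) Classifier chains: each label's classifier also sees the previously predicted labels.
chains = ClassifierChain(RandomForestClassifier(), order="random", random_state=0).fit(X, Y)

# 3) Label powerset: every distinct label combination becomes one multi-class label.
powerset_labels = np.array(["".join(map(str, row)) for row in Y])
label_powerset = RandomForestClassifier().fit(X, powerset_labels)

print(binary_relevance.predict(X[:2]))
print(chains.predict(X[:2]))
print(label_powerset.predict(X[:2]))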
3.
  • Funehag, Johan, et al. (author)
  • Rock Excavation Cycle and Its Effect on Grouting
  • 2019
  • In: ISRM 9th Nordic Grouting Symposium. - : International Society for Rock Mechanics and Rock Engineering. - 9789517586481 ; pp. 47-59
  • Conference paper (other academic/artistic) abstract
    • The drill-and-blast rock excavation cycle involves several processes that affect the grouting: vibration from drilling, borehole flushing, the blasting itself, and water-loss measurements in boreholes. This research project focused on the effects of water-loss measurements, drilling (both vibration and flushing), and blasting on the grout during the first five hours of the hardening process. A conceptual model was derived to explain which forces act on the grout and how these forces can be interpreted in order to reveal how they affect the grout. The model suggests that the shear modulus of the grout is a key parameter for understanding the degradation of the grout. The different forces/stresses during an excavation affect the grout differently, but the blasting is by far the most difficult process to describe. One starting point of the project is that the grout does not reinforce the rock and therefore should not hinder the gas expansion: the blasting should generate new fractures around the blast hole, the gas should penetrate these cracks, and finally the expansion of gases should cause fragmentation and movement in the rock mass. The paper describes the results from the field test and how the grout was characterized using a rheometer. The field test was conducted in a short tunnel niche. The effects of the drill-and-blast cycle were studied, i.e. vibrations from drilling, borehole flushing/water-loss measurements, and vibrations from blasting. Five boreholes with a centre distance of 1 m were chosen for monitoring, owing to the hydraulic connectivity between the boreholes. Four of these boreholes were successfully grouted by following a grouting design, and one borehole was left un-grouted and used for blasting. The effect of the blast hole on the grouting in the rock mass was measured using water-loss measurements in adjacent boreholes. One result of the field test is that the grout was not affected by the blasting under the circumstances used. Instead, the study revealed that the water-loss measurements negatively affected the connected boreholes.
4.
  • Khalid, Nabeel, et al. (author)
  • Bounding box is all you need : learning to segment cells in 2D microscopic images via box annotations
  • 2024
  • In: Medical image understanding and analysis. - Cham : Springer. - 9783031669545 - 9783031669552 ; pp. 314-328
  • Conference paper (peer-reviewed) abstract
    • Microscopic imaging plays a pivotal role in various fields of science and medicine, offering invaluable insights into the intricate world of cellular biology. At the heart of this endeavor lies the need for accurate identification and characterization of individual cells within these images. Deep learning-based cell segmentation, which involves delineating cells from complex microscopic images, is pivotal for cell analysis. It serves as the foundation for extracting meaningful information about cell morphology, spatial organization, and interactions. However, traditional deep-learning models for cell segmentation require extensive and expensive annotation masks for each cell in the image, posing a significant challenge. To address this issue, this study introduces CellBoxify, a novel pipeline that streamlines cell instance segmentation. Unlike traditional methods, CellBoxify operates solely on bounding box annotations, making it approximately seven times faster than manual segmentation mask annotation for each cell. The proposed approach’s effectiveness is evident in its performance on the LIVECell dataset, a well-known resource for cell segmentation research. Achieving 83.40% of the fully supervised performance on this dataset demonstrates the efficacy of the proposed method.
5.
  • Khalid, Nabeel, et al. (author)
  • CellGenie: an end-to-end pipeline for synthetic cellular data generation and segmentation : a use case for cell segmentation in microscopic images
  • 2024
  • In: Medical image understanding and analysis. - : Springer Nature. - 9783031669545 - 9783031669552 ; pp. 387-401
  • Conference paper (peer-reviewed) abstract
    • Cellular imaging plays a pivotal role in understanding various biological processes and diseases, making accurate cell segmentation indispensable for many biomedical applications. However, traditional methods for cell segmentation often rely on manual annotation, which is labor-intensive and time-consuming. Deep learning-based approaches for cell segmentation have shown promising results, but they require a vast amount of annotated data for training. In this context, this study presents CellGenie, an end-to-end pipeline designed to address the challenge of data scarcity in deep learning-based cell segmentation. This research proposes an innovative approach for automatic synthetic data generation tailored for microscopic image analysis. Leveraging the rich information provided by the LIVECell dataset, CellGenie generates synthetic microscopic images along with their corresponding segmentation masks for individual cells. By seamlessly integrating this synthetic data into the training process, this study enhances the performance of cell segmentation models beyond the limitations of existing annotated datasets. Furthermore, extensive experiments are conducted to evaluate the efficacy of the generated data across various experimental scenarios. The results demonstrate the substantial impact of synthetic data generation in improving the robustness and generalization of cell segmentation models.
6.
  • Khalid, Nabeel, et al. (author)
  • DeepCeNS : An end-to-end Pipeline for Cell and Nucleus Segmentation in Microscopic Images
  • 2021
  • In: Proceedings of the International Joint Conference on Neural Networks. - : IEEE. - 9780738133669 - 9781665439008 - 9781665445979
  • Conference paper (peer-reviewed) abstract
    • With the evolution of deep learning over the past decade, many biomedical problems that once seemed intractable are now feasible. The introduction of the U-Net and Mask R-CNN architectures has paved the way for many object detection and segmentation tasks in numerous applications, ranging from security to biomedical applications. In the cell biology domain, light microscopy imaging provides a cheap and accessible source of raw data for studying biological phenomena. By leveraging such data and deep learning techniques, human diseases can be diagnosed more easily and the process of treatment development can be greatly expedited. In microscopic imaging, accurate segmentation of individual cells is a crucial step toward better insight into cellular heterogeneity. To address these challenges, DeepCeNS is proposed in this paper to detect and segment cells and nuclei in microscopic images. We use the EVICAN2 dataset, which contains microscopic images from a variety of microscopes and numerous cell cultures, to evaluate the proposed pipeline. DeepCeNS outperforms EVICAN-MRCNN by a significant margin on the EVICAN2 dataset.
7.
  • Khalid, Nabeel, et al. (author)
  • DeepCIS : An end-to-end Pipeline for Cell-type aware Instance Segmentation in Microscopic Images
  • 2021
  • In: 2021 IEEE EMBS International Conference on Biomedical and Health Informatics, Proceedings. - : Institute of Electrical and Electronics Engineers (IEEE). - 9781665403580
  • Conference paper (peer-reviewed) abstract
    • Accurate cell segmentation in microscopic images is a useful tool for analyzing individual cell behavior, which helps to diagnose human diseases and to develop new treatments. Segmentation of individual cells in a microscopic image with many cells in view allows quantification of single-cell features, such as shape or movement patterns, providing rich insight into cellular heterogeneity. Most cell segmentation algorithms to date focus on segmenting cells in the images without classifying the culture of the cells. Discrimination among cell types in microscopic images can lead to a new era of high-throughput cell microscopy: multiple cell types in co-culture can be easily identified, and studying the changes in cell morphology can lead to many applications such as drug treatment. To address this gap, DeepCIS is proposed to detect, segment, and classify the culture of cells and nuclei in microscopic images. We use the EVICAN60 dataset, which contains microscopic images from a variety of microscopes and numerous cell cultures, to evaluate the proposed pipeline. To further demonstrate the utility of DeepCIS, we design various experimental settings to uncover its learning potential. We achieve a mean average precision score of 24.37% for the segmentation task, averaged over 30 classes for cells and nuclei.
8.
  • Khalid, Nabeel, et al. (author)
  • DeepMuCS: A framework for co-culture microscopic image analysis : from generation to segmentation
  • 2022
  • In: 2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI). - : IEEE. - 9781665487917 ; pp. 1-4
  • Conference paper (peer-reviewed) abstract
    • Discrimination between cell types in a co-culture environment with multiple cell lines can assist in examining the interaction between different cell populations. Identifying different cell cultures, in addition to cell segmentation, in co-culture is essential for understanding the cellular mechanisms associated with disease states. In drug development, biologists are more interested in co-culture models because they replicate the in vivo tumor environment better than monoculture models. Additionally, they have a measurable effect on cancer cell response to treatment. Co-culture models are critical for designing a drug with maximum efficacy on cancer while minimizing harm to the rest of the body. In the past, there was minimal progress related to cell-type-aware segmentation in monoculture and no development whatsoever for co-culture. The introduction of the LIVECell dataset has allowed us to perform experiments for cell-type-aware segmentation; however, it is composed of microscopic images in a monoculture environment. This paper presents a framework for co-culture microscopic image data generation, where each image can contain multiple cell cultures. The framework also presents a pipeline for culture-dependent cell segmentation in co-culture microscopic images. The extensive evaluation revealed that it is possible to achieve cell-type-aware segmentation in co-culture microscopic images with good precision.
9.
  • Khalid, Nabeel, et al. (author)
  • PACE : point annotation-based cell segmentation for efficient microscopic image analysis
  • 2023
  • In: Artificial Neural Networks and Machine Learning – ICANN 2023. - : Springer Nature. - 9783031442094 - 9783031442100 ; pp. 545-557
  • Conference paper (peer-reviewed) abstract
    • Cells are essential to life because they provide the functional, genetic, and communication mechanisms needed for the proper functioning of living organisms. Cell segmentation is pivotal for any biological hypothesis validation or analysis, i.e., for gaining valuable insights into cell behavior, function, diagnosis, and treatment. Deep learning-based segmentation methods have high segmentation precision; however, they need full segmentation masks for each cell, annotated manually by experts, which is very laborious and costly. Many approaches have been developed in the past to reduce the manual annotation effort, and even though these approaches produce good results, there is still a noticeable difference in performance compared to fully supervised methods. To fill that gap, a weakly supervised approach, PACE, is presented, which uses only point annotations and the bounding box for each cell to perform cell instance segmentation. The proposed approach not only achieves 99.8% of the fully supervised performance, but also surpasses the previous state of the art by a margin of more than 4%.
10.
  • Khalid, Nabeel, et al. (author)
  • Point2Mask : A Weakly Supervised Approach for Cell Segmentation Using Point Annotation
  • 2022
  • In: Medical image understanding and analysis. - Cham : Springer. - 9783031120527 - 9783031120534 ; pp. 139-153
  • Conference paper (peer-reviewed) abstract
    • Identifying cells in microscopic images is a crucial step in image-based cell biology research. Cell instance segmentation provides an opportunity to study the shape, structure, form, and size of cells. Deep learning approaches for cell instance segmentation rely on an instance segmentation mask for each cell, which is labor-intensive and expensive to produce. An ample amount of unlabeled microscopic data is available in the cell biology domain, but due to the tedious and exorbitant nature of the annotations needed for cell instance segmentation approaches, the full potential of the data is not explored. This paper presents a weakly supervised approach that can perform cell instance segmentation using only point and bounding-box annotations, which enormously reduces the annotation effort. The proposed approach is evaluated on a benchmark dataset, LIVECell, where, using only a bounding box and randomly generated points on each cell, it achieves a mean average precision score of 43.53%, which is as good as the fully supervised segmentation method trained with complete segmentation masks. In addition, it is 3.71 times faster to annotate with a bounding box and points than with full mask annotation.
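The last two entries describe weak supervision from a bounding box and a few points per cell instead of a full mask. As a rough illustration of what such weak labels look like, this Python sketch derives a box and random interior points from a binary instance mask; the function name, point count, and toy mask are assumptions, not code from either paper.

# Illustrative sketch: derive weak labels (one bounding box + a few points)
# from a single-cell binary mask, e.g. to simulate reduced annotation effort.
import numpy as np

def weak_labels_from_mask(mask, n_points=5, seed=0):
    """mask: (H, W) binary array for one cell instance."""
    rng = np.random.default_rng(seed)
    ys, xs = np.nonzero(mask)
    box = (xs.min(), ys.min(), xs.max(), ys.max())           # x_min, y_min, x_max, y_max
    idx = rng.choice(len(ys), size=min(n_points, len(ys)), replace=False)
    points = list(zip(xs[idx].tolist(), ys[idx].tolist()))   # random points inside the cell
    return box, points

toy_mask = np.zeros((64, 64), dtype=np.uint8)
toy_mask[20:40, 25:45] = 1                                   # hypothetical 20x20 cell
print(weak_labels_from_mask(toy_mask))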