SwePub
Search the SwePub database


Results list for the search "hsv:(TEKNIK OCH TEKNOLOGIER) hsv:(Medicinteknik) hsv:(Medicinsk bildbehandling) ;pers:(Wählby Carolina)"


  • Results 1-10 of 62
1.
  • Allalou, Amin, 1981-, et al. (author)
  • Approaches for increasing throughput and information content of image-based zebrafish screens
  • 2011
  • In: Proceedings of SSBA 2011.
  • Conference paper (other academic/artistic) abstract
    • Microscopy in combination with image analysis has emerged as one of the most powerful and informative ways to analyze cell-based high-throughput screening (HTS) samples in experiments designed to uncover novel drugs and drug targets. However, many diseases and biological pathways can be better studied in whole animals, particularly diseases and pathways that involve organ systems and multicellular interactions, such as organ development, neuronal degeneration and regeneration, cancer metastasis, infectious disease progression and pathogenesis. The zebrafish is a widespread and popular vertebrate model of human organ function and development, and it is unique in the sense that large-scale in vivo genetic and chemical studies are feasible due in part to its small size, optical transparency, and aquatic habitat. To improve the throughput and complexity of zebrafish screens, a high-throughput platform for cellular-resolution in vivo chemical and genetic screens on zebrafish larvae has been developed at the Yanik lab at the Research Laboratory of Electronics, MIT, USA. The system loads live zebrafish from reservoirs or multiwell plates, positions and rotates them for high-speed confocal imaging of organs, and dispenses the animals without damage. We present two improvements to the described system, including automation of the positioning of the animals and a novel approach for brightfield microscopy tomographic imaging of living animals.
  •  
2.
  • Allalou, Amin, 1981- (author)
  • Methods for 2D and 3D Quantitative Microscopy of Biological Samples
  • 2011
  • Doctoral thesis (other academic/artistic) abstract
    • New microscopy techniques are continuously developed, resulting in more rapid acquisition of large amounts of data. Manual analysis of such data is extremely time-consuming, and many features are difficult to quantify without the aid of a computer. With automated image analysis, biologists can extract quantitative measurements and increase throughput significantly, which becomes particularly important in high-throughput screening (HTS). This thesis addresses automation of traditional analysis of cell data as well as automation of both image capture and analysis in zebrafish high-throughput screening. It is common in microscopy images to stain the nuclei in the cells, and to label the DNA and proteins in different ways. Padlock probing and proximity ligation are highly specific detection methods that produce point-like signals within the cells. Accurate signal detection and segmentation is often a key step in the analysis of these types of images. Cells in a sample will always show some degree of variation in DNA and protein expression, and to quantify these variations each cell has to be analyzed individually. This thesis presents development and evaluation of single-cell analysis on a range of different types of image data. In addition, we present a novel method for signal detection in three dimensions. HTS systems often use a combination of microscopy and image analysis to analyze cell-based samples. However, many diseases and biological pathways can be better studied in whole animals, particularly those that involve organ systems and multi-cellular interactions. The zebrafish is a widely used vertebrate model of human organ function and development. Our collaborators have developed a high-throughput platform for cellular-resolution in vivo chemical and genetic screens on zebrafish larvae.
This thesis presents improvements to the system, including accurate positioning of the fish, incorporating methods for detecting regions of interest and making the system fully automatic. Furthermore, the thesis describes a novel high-throughput tomography system for screening live zebrafish in both fluorescence and bright-field microscopy. This 3D imaging approach, combined with automatic quantification of morphological changes, enables previously intractable high-throughput screening of vertebrate model organisms.
  •  
3.
  • Andersson, Axel, et al. (author)
  • ISTDECO : In Situ Transcriptomics Decoding by Deconvolution
  • Other publication (other academic/artistic) abstract
    • In Situ Transcriptomics (IST) is a set of image-based transcriptomics approaches that enables localisation of gene expression directly in tissue samples. IST techniques produce multiplexed image series in which fluorescent spots are either present or absent across imaging rounds and colour channels. A spot's presence and absence form a type of barcoded pattern that labels a particular type of mRNA. Therefore, the expression of a gene can be determined by localising the fluorescent spots and decoding the barcode that they form. Existing IST algorithms usually do this in two separate steps: spot localisation and barcode decoding. Although these algorithms are efficient, they are limited by strictly separating the localisation and decoding steps. This limitation becomes apparent in regions with low signal-to-noise ratio or high spot densities. We argue that improved gene expression decoding can be obtained by combining these two steps into a single algorithm. This allows for an efficient decoding that is less sensitive to noise and optical crowding. We present IST Decoding by Deconvolution (ISTDECO), a principled decoding approach combining spectral and spatial deconvolution into a single algorithm. We evaluate ISTDECO on simulated data, as well as on two real IST datasets, and compare with the state-of-the-art. ISTDECO achieves state-of-the-art performance despite high spot densities and low signal-to-noise ratios. It is easily implemented and runs efficiently using a GPU. The ISTDECO implementation, datasets and demos are available online at: github.com/axanderssonuu/istdeco
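The decoding problem the abstract describes can be illustrated with a toy version of the conventional two-step baseline that ISTDECO improves on: localise a spot, then match its presence/absence pattern over rounds and channels against a codebook. The codebook, gene names, and scoring below are invented for illustration and are not taken from the ISTDECO implementation.

```python
import numpy as np

# Toy two-step baseline: each gene is encoded as a binary barcode over
# imaging rounds x colour channels (here 4 rounds x 4 channels), and a
# detected spot is assigned to the gene whose barcode best matches the
# spot's observed intensities. (Hypothetical codebook; ISTDECO itself
# replaces this separate localise-then-decode scheme with a joint
# spectral/spatial deconvolution.)
CODEBOOK = {
    "geneA": np.array([[1, 0, 0, 0],
                       [0, 1, 0, 0],
                       [0, 0, 1, 0],
                       [0, 0, 0, 1]], dtype=float),
    "geneB": np.array([[0, 0, 0, 1],
                       [0, 0, 1, 0],
                       [0, 1, 0, 0],
                       [1, 0, 0, 0]], dtype=float),
}

def decode_spot(intensities: np.ndarray, codebook: dict) -> str:
    """Assign a rounds-x-channels intensity patch to the best-matching barcode."""
    scores = {gene: float(np.sum(intensities * code) / np.linalg.norm(code))
              for gene, code in codebook.items()}
    return max(scores, key=scores.get)

# A noisy observation of geneA's pattern still decodes to geneA.
rng = np.random.default_rng(0)
noisy = CODEBOOK["geneA"] + 0.1 * rng.standard_normal((4, 4))
print(decode_spot(noisy, CODEBOOK))  # geneA
```

The failure modes the abstract names are visible even in this sketch: when two spots overlap, their intensity patterns add, and per-spot matching against a clean codebook breaks down, which is what motivates decoding by joint deconvolution instead.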
  •  
4.
  • Bengtsson, Ewert, 1948-, et al. (author)
  • Detection of Malignancy-Associated Changes Due to Precancerous and Oral Cancer Lesions: A Pilot Study Using Deep Learning
  • 2018
  • In: CYTO2018.
  • Conference paper (peer-reviewed) abstract
    • Background: The incidence of oral cancer is increasing, and it is affecting younger individuals. PAP-smear-based screening, both visual and automated, has been used for decades to successfully decrease the incidence of cervical cancer. Can similar methods be used for oral cancer screening? We have carried out a pilot study using neural networks for classifying cells from both cervical cancer and oral cancer patients. The results, which were reported from a technical point of view at the 2017 IEEE International Conference on Computer Vision Workshop (ICCVW), were particularly interesting for the oral cancer cases, and we are currently collecting and analyzing samples from more patients. Methods: Samples were collected with a brush in the oral cavity and smeared on glass slides, stained, and prepared according to standard PAP procedures. Images from the slides were digitized with a 0.35 micron pixel size, using focus stacks with 15 levels 0.4 micron apart. Between 245 and 2,123 cell nuclei were manually selected for analysis for each of 14 datasets, usually 2 datasets for each of the 6 cases, in total around 15,000 cells. A small region was cropped around each nucleus, and the best 2 adjacent focus layers in each direction were automatically found, thus creating images of 100x100x5 pixels. Nuclei were chosen with an aim to select well-preserved free-lying cells, with no effort to specifically select diagnostic cells. We therefore had no ground truth on the cellular level, only on the patient level. Subsets of these images were used for training 2 sets of neural networks, created according to the ResNet and VGG architectures described in the literature, to distinguish between cells from healthy persons and those with precancerous lesions. The datasets were augmented through mirroring and 90-degree rotations. The resulting networks were used to classify subsets of cells from different persons than those in the training sets. This was repeated for a total of 5 folds.
Results: The results were expressed as the percentage of cell nuclei that the neural networks indicated as positive. The percentage of positive cells from healthy persons was in the range 8% to 38%. The percentage of positive cells collected near the lesions was in the range 31% to 96%. The percentages from the healthy side of the oral cavity of patients with lesions ranged from 37% to 89%. For each fold, it was possible to find a threshold for the number of positive cells that would correctly classify all patients as normal or positive, even for the samples taken from the healthy side of the oral cavity. The network based on the ResNet architecture showed slightly better performance than the VGG-based one. Conclusion: Our small pilot study indicates that malignancy-associated changes that can be detected by neural networks may exist among cells in the oral cavity of patients with precancerous lesions. We are currently collecting samples from more patients, and will present those results as well, with our poster at CYTO 2018.
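The patient-level decision rule in the Results section amounts to thresholding the fraction of network-positive cells per sample. A minimal sketch with invented fractions (the study only reports ranges, and the real threshold was chosen per fold):

```python
# Sketch of the patient-level rule described above: a sample is called
# positive when the fraction of cells the network flags exceeds a threshold.
# The fractions and the 0.40 threshold below are hypothetical illustrations,
# loosely echoing the reported ranges (8-38% healthy, 31-96% near lesions).

def classify_sample(fraction_positive: float, threshold: float) -> str:
    return "positive" if fraction_positive >= threshold else "normal"

healthy_samples = [0.08, 0.21, 0.38]   # invented healthy-donor fractions
lesion_samples = [0.41, 0.67, 0.96]    # invented patient fractions
THRESHOLD = 0.40                       # one threshold per fold in the study

print([classify_sample(f, THRESHOLD) for f in healthy_samples])
# ['normal', 'normal', 'normal']
print([classify_sample(f, THRESHOLD) for f in lesion_samples])
# ['positive', 'positive', 'positive']
```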
  •  
5.
  • Bombrun, Maxime, et al. (author)
  • A web application to analyse and visualize digital images at multiple resolutions
  • 2017
  • Conference paper (other academic/artistic) abstract
    • Computerised image processing and automated quantification of cell and tissue morphology are becoming important tools for complementing visual assessment when investigating disease and/or drug response. The distribution and organisation of cells in intact tissue samples provide a rich visual-cognitive combination of information at multiple resolutions. The lowest magnification describes specific architectural patterns in the global tissue organization. At the same time, new methods for in situ sequencing of RNA allow profiling of gene expression at cellular resolution. Analysis at multiple resolutions thus opens up for large-scale comparison of genotype and phenotype. Expressed genes are locally amplified by molecular probes and rolling circle amplification, and decoded by repeating the sequencing cycle for the four letters of the genetic code. Using image processing methodologies on these giga-pixel images (40000 x 48000 pixels), we have identified more than 40 genes in parallel in the same tissue sample. Here, we present an open-source tool which combines the quantification of cell and tissue morphology with the analysis of gene expression. Our framework builds on CellProfiler, a free and open-source software developed for image-based screening, and our viewing platform allows experts to visualize both gene expression patterns and quantitative measurements of tissue morphology with different overlays, such as the commonly used H&E staining. Furthermore, the user can draw regions of interest and extract local statistics on gene expression and tissue morphology over large slide-scanner images at different resolutions. The TissueMaps platform provides a flexible solution to support the future development of histopathology, both as a diagnostic tool and as a research field.
  •  
6.
  • Bombrun, Maxime, et al. (author)
  • Decoding gene expression in 2D and 3D
  • 2017
  • In: Image Analysis. - Cham : Springer. - 9783319591285 ; , pp. 257-268
  • Conference paper (peer-reviewed) abstract
    • Image-based sequencing of RNA molecules directly in tissue samples provides a unique way of relating spatially varying gene expression to tissue morphology. Despite the fact that tissue samples are typically cut in micrometer-thin sections, modern molecular detection methods result in signals so densely packed that optical “slicing” by imaging at multiple focal planes becomes necessary to image all signals. Chromatic aberration, signal crosstalk and low signal-to-noise ratio further complicate the analysis of multiple sequences in parallel. Here, a previous 2D analysis approach for image-based gene decoding was used to show how both signal count and signal precision increase when analyzing the data in 3D instead. We corrected the extracted signal measurements for signal crosstalk, and improved the results of both 2D and 3D analysis. We applied our methodologies on a tissue sample imaged in six fluorescent channels during five cycles and seven focal planes, resulting in 210 images. Our methods are able to detect more than 5000 signals representing 140 different expressed genes analyzed and decoded in parallel.
  •  
7.
  •  
8.
  • Bombrun, Maxime, et al. (author)
  • TissueMaps : A large multi-scale data analysis platform for digital image application built on open-source software
  • 2016
  • Conference paper (other academic/artistic) abstract
    • Automated analysis of microscopy data and quantification of cell and tissue morphology have become important tools for investigating disease and/or drug response. New methods of in situ sequencing of RNA allow profiling of gene expression at cellular resolution in intact tissue samples, and thus open up for large-scale comparison of genotype and phenotype. Expressed genes are locally amplified by molecular probes and rolling circle amplification, and decoded by analysis of repeated imaging and sequencing cycles. Using image processing methodologies on these giga-pixel images (40000 x 48000 pixels), we have identified more than 40 genes in parallel in the same tissue sample. On the other hand, the distribution and organisation of cells in the tissue contain rich information at multiple resolutions. The lowest resolution describes the global tissue arrangement, while the cellular resolution allows us to quantify gene expression and morphology of individual cells. Here, we present an open-source tool which combines the analysis of gene expression with quantification of cell and tissue morphology. Our framework builds on CellProfiler, a free and open-source software developed for image-based screening, and our viewing platform allows experts to visualize analysis results with different overlays, such as the commonly used H&E staining. Furthermore, the user can draw regions of interest and extract local statistics on gene expression and tissue morphology over large slide-scanner images at different resolutions (Fig. 1). The TissueMaps platform provides a flexible solution to support the future development of histopathology, both as a diagnostic tool and as a research field.
  •  
9.
  • Chelebian, Eduard, et al. (author)
  • DEPICTER : Deep representation clustering for histology annotation
  • 2024
  • In: Computers in Biology and Medicine. - : Elsevier. - 0010-4825 .- 1879-0534. ; 170
  • Journal article (peer-reviewed) abstract
    • Automatic segmentation of histopathology whole-slide images (WSI) usually involves supervised training of deep learning models with pixel-level labels to classify each pixel of the WSI into tissue regions such as benign or cancerous. However, fully supervised segmentation requires large-scale data manually annotated by experts, which can be expensive and time-consuming to obtain. Non-fully supervised methods, ranging from semi-supervised to unsupervised, have been proposed to address this issue and have been successful in WSI segmentation tasks. But these methods have mainly been focused on technical advancements in algorithmic performance rather than on the development of practical tools that could be used by pathologists or researchers in real-world scenarios. In contrast, we present DEPICTER (Deep rEPresentatIon ClusTERing), an interactive segmentation tool for histopathology annotation that produces a patch-wise dense segmentation map at WSI level. The interactive nature of DEPICTER leverages self- and semi-supervised learning approaches to allow the user to participate in the segmentation, producing reliable results while reducing the workload. DEPICTER consists of three steps: first, a pretrained model is used to compute embeddings from image patches. Next, the user selects a number of benign and cancerous patches from the multi-resolution image. Finally, guided by the deep representations, label propagation is achieved using our novel seeded iterative clustering method or by directly interacting with the embedding space via feature-space gating. We report both real-time interaction results with three pathologists and evaluate the performance on three public cancer classification dataset benchmarks through simulations. The code and demos of DEPICTER are publicly available at https://github.com/eduardchelebian/depicter.
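The three steps in the DEPICTER abstract can be sketched at their simplest: embed patches, take a few user-labelled seeds, and propagate labels through the embedding space. The nearest-seed-centroid rule below is a deliberately reduced stand-in for the paper's seeded iterative clustering, with random vectors in place of pretrained-model embeddings.

```python
import numpy as np

# Reduced sketch of seeded label propagation in an embedding space:
# every unlabelled patch embedding receives the label of the closest
# seed-class centroid. (DEPICTER's actual seeded iterative clustering
# and feature-space gating are more elaborate; all data here is synthetic.)

def propagate_labels(embeddings, seed_idx, seed_labels):
    classes = sorted(set(seed_labels))
    centroids = np.stack([
        embeddings[[i for i, lab in zip(seed_idx, seed_labels) if lab == c]].mean(axis=0)
        for c in classes
    ])
    # distance from every embedding to every class centroid
    dists = np.linalg.norm(embeddings[:, None, :] - centroids[None, :, :], axis=2)
    return [classes[k] for k in dists.argmin(axis=1)]

rng = np.random.default_rng(1)
benign = rng.normal(0.0, 0.3, size=(20, 8))   # synthetic "benign" patch features
cancer = rng.normal(2.0, 0.3, size=(20, 8))   # synthetic "cancer" patch features
patches = np.vstack([benign, cancer])

# The user labels one patch from each region; the rest are propagated.
labels = propagate_labels(patches, seed_idx=[0, 20],
                          seed_labels=["benign", "cancer"])
```

Iterating this step, re-estimating centroids from the newly propagated labels each round, would move the sketch closer to the seeded iterative clustering the abstract describes.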
  •  
10.
  • Chelebian, Eduard, et al. (author)
  • Morphological Features Extracted by AI Associated with Spatial Transcriptomics in Prostate Cancer
  • 2021
  • In: Cancers. - : MDPI AG. - 2072-6694. ; 13:19
  • Journal article (peer-reviewed) abstract
    • Simple Summary: Prostate cancer has very varied appearances when examined under the microscope, and it is difficult to distinguish clinically significant cancer from indolent disease. In this study, we use computer analyses inspired by neurons, so-called 'neural networks', to gain new insights into the connection between how tissue looks and the underlying genes which program the function of prostate cells. Neural networks are 'trained' to carry out specific tasks, and training requires large numbers of training examples. Here, we show that a network pre-trained on different data can still identify biologically meaningful regions, without the need for additional training. The neural network interpretations matched independent manual assessment by human pathologists, and even resulted in more refined interpretation when considering the relationship with the underlying genes. This is a new way to automatically detect prostate cancer and its genetic characteristics without the need for human supervision, which means it could possibly help in making better treatment decisions.
Prostate cancer is a common cancer type in men, yet some of its traits are still under-explored. One reason for this is high molecular and morphological heterogeneity. The purpose of this study was to develop a method to gain new insights into the connection between morphological changes and underlying molecular patterns. We used artificial intelligence (AI) to analyze the morphology of seven hematoxylin and eosin (H&E)-stained prostatectomy slides from a patient with multi-focal prostate cancer. We also paired the slides with spatially resolved expression for thousands of genes obtained by a novel spatial transcriptomics (ST) technique. As both spaces are highly dimensional, we focused on dimensionality reduction before seeking associations between them. Consequently, we extracted morphological features from H&E images using an ensemble of pre-trained convolutional neural networks and proposed a workflow for dimensionality reduction. To summarize the ST data into genetic profiles, we used a previously proposed factor analysis. We found that regions automatically defined by unsupervised clustering were associated with independent manual annotations, in some cases revealing further relevant subdivisions. The morphological patterns were also correlated with molecular profiles and could predict the spatial variation of individual genes. This novel approach enables flexible unsupervised studies relating morphological and genetic heterogeneity to be carried out using AI.
  •  
Type of publication
journal article (26)
conference paper (24)
doctoral thesis (7)
other publication (3)
research review (2)
Type of content
peer-reviewed (42)
other academic/artistic (19)
Author/editor
Wählby, Carolina, pr ... (21)
Wählby, Carolina, 19 ... (9)
Lindblad, Joakim (8)
Ranefall, Petter (8)
Wieslander, Håkan (8)
Solorzano, Leslie, 1 ... (6)
Ranefall, Petter, 19 ... (6)
Sintorn, Ida-Maria, ... (6)
Nilsson, Mats (4)
Spjuth, Ola, Profess ... (4)
Sladoje, Nataša (4)
Bombrun, Maxime (4)
Wetzer, Elisabeth (4)
Wählby, Carolina, Do ... (3)
Bengtsson, Ewert, Pr ... (3)
Bengtsson, Ewert (3)
Qian, Xiaoyan (3)
Gupta, Ankit (3)
Runow Stark, Christi ... (3)
Kartasalo, Kimmo (3)
Ishaq, Omer (3)
Hellander, Andreas (2)
Kimani, Joshua (2)
Sintorn, Ida-Maria (2)
Karlsson, Johan (2)
Elf, Johan (2)
Söderberg, Ola (2)
Allalou, Amin, 1981- (2)
Pardo-Martin, Carlos (2)
Yanik, Mehmet Fatih (2)
Allalou, Amin (2)
Kampf, Caroline (2)
Avenel, Christophe (2)
Pacureanu, Alexandra (2)
Simonsson, Martin (2)
Zhou Hagström, Nanna (2)
Klemm, Anna H (2)
Harrison, Philip J (2)
Bengtsson, Ewert, 19 ... (2)
Forslid, Gustav (2)
Hirsch, Jan-Michael (2)
Broliden, Kristina (2)
Partel, Gabriele (2)
Tjernlund, Annelie (2)
Carreras-Puigvert, J ... (2)
Koos, Björn (2)
Chelebian, Eduard (2)
Oliveira, Carla (2)
Gavrilovic, Milan, 1 ... (2)
Institution
Uppsala universitet (62)
Karolinska Institutet (8)
Stockholms universitet (3)
Kungliga Tekniska Högskolan (2)
Sveriges Lantbruksuniversitet (2)
Göteborgs universitet (1)
Language
English (62)
Research subject (UKÄ/SCB)
Engineering and Technology (62)
Natural Sciences (16)
Medical and Health Sciences (15)

Year
