SwePub
Search the SwePub database


Results list for the search "WFRF:(Jiahao Lu)"

  • Results 1-10 of 15
1.
  • Kristan, Matej, et al. (author)
  • The first visual object tracking segmentation VOTS2023 challenge results
  • 2023
  • In: 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW). Institute of Electrical and Electronics Engineers Inc. ISBN 9798350307443, 9798350307450. pp. 1788-1810
  • Conference paper (peer-reviewed). Abstract (an overlap-measure sketch follows this entry):
    • The Visual Object Tracking Segmentation VOTS2023 challenge is the eleventh annual tracker benchmarking activity of the VOT initiative. This challenge is the first to merge short-term and long-term as well as single-target and multiple-target tracking with segmentation masks as the only target location specification. A new dataset was created; the ground truth has been withheld to prevent overfitting. New performance measures and evaluation protocols have been created along with a new toolkit and an evaluation server. Results of the presented 47 trackers indicate that modern tracking frameworks are well-suited to deal with convergence of short-term and long-term tracking and that multiple and single target tracking can be considered a single problem. A leaderboard, with participating trackers' details, the source code, the datasets, and the evaluation kit are publicly available at the challenge website.
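The abstract does not spell out the new performance measures, but evaluation with segmentation masks as the only target specification rests on region overlap between predicted and ground-truth masks. Below is a minimal sketch of such an overlap (Jaccard) measure, assuming binary NumPy masks; the function name and the empty-frame convention are illustrative and not taken from the VOTS2023 toolkit.

```python
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Jaccard overlap between two binary segmentation masks.

    Treats an empty prediction on an empty frame as perfect overlap,
    a common convention when the target is absent in long-term tracking.
    """
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0
    return float(np.logical_and(pred, gt).sum() / union)
```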
2.
  • Kottwitz, Matthew, et al. (author)
  • Local Structure and Electronic State of Atomically Dispersed Pt Supported on Nanosized CeO2
  • 2019
  • In: ACS Catalysis. American Chemical Society. ISSN 2155-5435. 9(9), pp. 8738-8748
  • Journal article (peer-reviewed). Abstract:
    • Single atom catalysts (SACs) have shown high activity and selectivity in a growing number of chemical reactions. Many efforts aimed at unveiling the structure-property relationships underpinning these activities and developing synthesis methods for obtaining SACs with the desired structures are hindered by the paucity of experimental methods capable of probing the attributes of local structure, electronic properties, and interaction with the support: features that comprise key descriptors of their activity. In this work, we describe a combination of experimental and theoretical approaches that include photon and electron spectroscopy, scattering, and imaging methods, linked by density functional theory calculations, for providing detailed and comprehensive information on the atomic structure and electronic properties of SACs. This characterization toolbox is demonstrated here using a model single atom Pt/CeO2 catalyst prepared via a sol-gel-based synthesis method. Isolated Pt atoms together with extra oxygen atoms passivate the (100) surface of nanosized ceria. A detailed picture of the local structure of the Pt nearest environment emerges from this work, involving the bonding of isolated Pt2+ ions at the hollow sites of perturbed (100) surface planes of the CeO2 support, as well as a substantial (and heretofore unrecognized) strain within the CeO2 lattice in the immediate vicinity of the Pt centers. The detailed information on structural attributes provided by our approach is key to understanding and improving the properties of SACs.
3.
  • Lu, Jiahao, et al. (author)
  • A Deep Learning based Pipeline for Efficient Oral Cancer Screening on Whole Slide Images
  • 2019
  • Other publication (other academic/artistic). Abstract:
    • Oral cancer incidence is rapidly increasing worldwide. The most important determinant factor in cancer survival is early diagnosis. To facilitate large scale screening, we propose a fully automated end-to-end pipeline for oral cancer screening on whole slide cytology images. The pipeline consists of regression based nucleus detection, followed by per cell focus selection, and CNN based classification. We demonstrate that the pipeline provides fast and efficient cancer classification of whole slide cytology images, improving over previous results. The complete source code is made available as open source (https://github.com/MIDA-group/OralScreen).
4.
  • Lu, Jiahao, et al. (author)
  • A Deep Learning Based Pipeline for Efficient Oral Cancer Screening on Whole Slide Images
  • 2020
  • In: Image Analysis and Recognition. Cham: Springer International Publishing. ISBN 9783030505158, 9783030505165. pp. 249-261
  • Conference paper (peer-reviewed). Abstract (a pipeline sketch follows this entry):
    • Oral cancer incidence is rapidly increasing worldwide. The most important determinant factor in cancer survival is early diagnosis. To facilitate large scale screening, we propose a fully automated pipeline for oral cancer detection on whole slide cytology images. The pipeline consists of fully convolutional regression-based nucleus detection, followed by per-cell focus selection, and CNN based classification. Our novel focus selection step provides fast per-cell focus decisions at human-level accuracy. We demonstrate that the pipeline provides efficient cancer classification of whole slide cytology images, improving over previous results both in terms of accuracy and feasibility. The complete source code is made available as open source (https://github.com/MIDA-group/OralScreen).
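The three stages named in the abstract (regression-based nucleus detection, per-cell focus selection, CNN classification) can be sketched compactly. The following is a minimal illustration under assumed interfaces; none of the function names, parameters, or thresholds come from the authors' OralScreen repository.

```python
import numpy as np
from scipy.ndimage import laplace, maximum_filter

def detect_nuclei(density_map: np.ndarray, threshold: float = 0.5):
    """Nucleus detection: local maxima of the density map predicted
    by the fully convolutional regression network."""
    is_peak = density_map == maximum_filter(density_map, size=15)
    is_peak &= density_map > threshold
    return list(zip(*np.nonzero(is_peak)))  # one (row, col) per nucleus

def select_focus(z_stack: np.ndarray) -> np.ndarray:
    """Per-cell focus selection: keep the z-slice whose Laplacian
    response has the highest variance, a standard sharpness proxy."""
    scores = [laplace(s.astype(float)).var() for s in z_stack]
    return z_stack[int(np.argmax(scores))]

# Final stage: each in-focus cell patch goes to a CNN classifier,
# e.g. predictions = [cnn.predict(p[None, ...]) for p in patches].
```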
5.
  • Lu, Jiahao, et al. (author)
  • Image-to-Image Translation in Multimodal Image Registration: How Well Does It Work?
  • 2021
  • Conference paper (other academic/artistic). Abstract:
    • Despite current advancements in the field of biomedical image processing, propelled by the deep learning revolution, the registration of multimodal microscopy images, due to its specific challenges, is still often performed manually by specialists. Image-to-image (I2I) translation aims at transforming images from one domain while preserving their contents so they have the style of images from another domain. The recent success of I2I translation in computer vision applications and its growing use in biomedical areas provide a tempting possibility of transforming the multimodal registration problem into a, potentially easier, monomodal one. We have recently conducted an empirical study of the applicability of modern I2I translation methods for the task of multimodal biomedical image registration. We selected four Generative Adversarial Network (GAN)-based methods, which differ in terms of supervision requirement, design concepts, output quality and diversity, popularity, and scalability, and one contrastive representation learning method. The effectiveness of I2I translation for multimodal image registration is judged by comparing the performance of these five methods subsequently combined with two representative monomodal registration methods. We evaluate these method combinations on three publicly available multimodal datasets of increasing difficulty (including both cytological and histological images), and compare with the performance of registration by Mutual Information maximisation and one modern data-specific multimodal registration method. Our results suggest that, although I2I translation may be helpful when the modalities to register are clearly correlated, modalities which express distinctly different properties of the sample are not handled well enough. When less information is shared between the modalities, the I2I translation methods struggle to provide good predictions, which impairs the registration performance. They are all outperformed by the evaluated representation learning method, which aims to find an in-between representation, and also by the Mutual Information maximisation approach. We therefore conclude that current I2I approaches are, at this point, not suitable for multimodal biomedical image registration. Further details, including the code, datasets and the complete experimental setup can be found at https://github.com/MIDA-group/MultiRegEval.
6.
  • Lu, Jiahao, et al. (author)
  • Is image-to-image translation the panacea for multimodal image registration? A comparative study
  • 2022
  • In: PLOS ONE. Public Library of Science (PLoS). ISSN 1932-6203. 17(11)
  • Journal article (peer-reviewed). Abstract (an MI-measure sketch follows this entry):
    • Despite current advancement in the field of biomedical image processing, propelled by the deep learning revolution, multimodal image registration, due to its several challenges, is still often performed manually by specialists. The recent success of image-to-image (I2I) translation in computer vision applications and its growing use in biomedical areas provide a tempting possibility of transforming the multimodal registration problem into a, potentially easier, monomodal one. We conduct an empirical study of the applicability of modern I2I translation methods for the task of rigid registration of multimodal biomedical and medical 2D and 3D images. We compare the performance of four Generative Adversarial Network (GAN)-based I2I translation methods and one contrastive representation learning method, subsequently combined with two representative monomodal registration methods, to judge the effectiveness of modality translation for multimodal image registration. We evaluate these method combinations on four publicly available multimodal (2D and 3D) datasets and compare with the performance of registration achieved by several well-known approaches acting directly on multimodal image data. Our results suggest that, although I2I translation may be helpful when the modalities to register are clearly correlated, registration of modalities which express distinctly different properties of the sample is not well handled by the I2I translation approach. The evaluated representation learning method, which aims to find abstract image-like representations of the information shared between the modalities, manages better, and so does the Mutual Information maximisation approach, acting directly on the original multimodal images. We share our complete experimental setup as open-source (https://github.com/MIDA-group/MultiRegEval), including method implementations, evaluation code, and all datasets, for further reproducing and benchmarking.
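Among the approaches acting directly on the original multimodal images, Mutual Information maximisation is the classic baseline: a registration optimiser evaluates the MI between the fixed image and the transformed moving image at each candidate transform. A minimal sketch of the measure, estimated from a joint intensity histogram (an illustrative implementation, not the one used in the MultiRegEval setup):

```python
import numpy as np

def mutual_information(a: np.ndarray, b: np.ndarray, bins: int = 32) -> float:
    """Mutual information between two overlapping images, estimated
    from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)  # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)  # marginal of image b
    nz = pxy > 0                         # skip empty bins to avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px * py)[nz])).sum())
```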
7.
  • Lu, Jiahao, et al. (author)
  • Is Image-to-Image Translation the Panacea for Multimodal Image Registration? A Comparative Study
  • 2021
  • Other publication (other academic/artistic). Abstract:
    • Despite current advancement in the field of biomedical image processing, propelled by the deep learning revolution, multimodal image registration, due to its several challenges, is still often performed manually by specialists. The recent success of image-to-image (I2I) translation in computer vision applications and its growing use in biomedical areas provide a tempting possibility of transforming the multimodal registration problem into a, potentially easier, monomodal one. We conduct an empirical study of the applicability of modern I2I translation methods for the task of multimodal biomedical image registration. We compare the performance of four Generative Adversarial Network (GAN)-based methods and one contrastive representation learning method, subsequently combined with two representative monomodal registration methods, to judge the effectiveness of modality translation for multimodal image registration. We evaluate these method combinations on three publicly available multimodal datasets of increasing difficulty, and compare with the performance of registration by Mutual Information maximisation and one modern data-specific multimodal registration method. Our results suggest that, although I2I translation may be helpful when the modalities to register are clearly correlated, registration of modalities which express distinctly different properties of the sample is not well handled by the I2I translation approach. When less information is shared between the modalities, the I2I translation methods struggle to provide good predictions, which impairs the registration performance. The evaluated representation learning method, which aims to find an in-between representation, manages better, and so does the Mutual Information maximisation approach. We share our complete experimental setup as open-source https://github.com/Noodles-321/Registration.
8.
  • Luo, Zhenghui, et al. (author)
  • Heteroheptacene-based acceptors with thieno[3,2-b]pyrrole yield high-performance polymer solar cells
  • 2022
  • In: National Science Review. Oxford University Press. ISSN 2095-5138, e-ISSN 2053-714X. 9(7)
  • Journal article (peer-reviewed). Abstract (a PCE formula sketch follows this entry):
    • Rationally utilizing and developing synthetic units is of particular significance for the design of high-performance non-fullerene small-molecule acceptors (SMAs). Here, a thieno[3,2-b]pyrrole synthetic unit was employed to develop a set of SMAs (ThPy1, ThPy2, ThPy3 and ThPy4) by changing the number or the position of the pyrrole ring in the central core based on a standard SMA of IT-4Cl, compared to which the four thieno[3,2-b]pyrrole-based acceptors exhibit bathochromic absorption and upshifted frontier orbital energy level due to the strong electron-donating ability of pyrrole. As a result, the polymer solar cells (PSCs) of the four thieno[3,2-b]pyrrole-based acceptors yield higher open-circuit voltage and lower energy loss relative to those of the IT-4Cl-based device. What is more, the ThPy3-based device achieves a power conversion efficiency (PCE) (15.3%) and an outstanding fill factor (FF) (0.771) that are superior to those of the IT-4Cl-based device (PCE = 12.6%, FF = 0.758). The ThPy4-based device realizes the lowest energy loss and the smallest optical band gap, and the ternary PSC device based on PM6:BTP-eC9:ThPy4 exhibits a PCE of 18.43% and an FF of 0.802. Overall, this work sheds light on the great potential of thieno[3,2-b]pyrrole-based SMAs in realizing low energy loss and high PCE. Four heteroheptacene-based acceptors using the thieno[3,2-b]pyrrole building block were developed for the first time, and all four acceptor-based devices realized high performance and low energy loss.
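For readers unfamiliar with the photovoltaic figures of merit quoted above, power conversion efficiency relates to fill factor through PCE = Voc x Jsc x FF / Pin. A small worked sketch follows; the Voc and Jsc values are placeholders chosen only to land near the reported numbers, since the abstract gives PCE and FF but not Voc or Jsc.

```python
def power_conversion_efficiency(v_oc: float, j_sc: float, ff: float,
                                p_in: float = 100.0) -> float:
    """PCE in percent, from open-circuit voltage Voc [V], short-circuit
    current density Jsc [mA/cm^2], fill factor FF, and incident power
    Pin [mW/cm^2] (100 mW/cm^2 under standard AM1.5G illumination)."""
    return 100.0 * v_oc * j_sc * ff / p_in

# Placeholder Voc and Jsc; the abstract reports only PCE (18.43%) and
# FF (0.802) for the PM6:BTP-eC9:ThPy4 ternary device.
print(power_conversion_efficiency(v_oc=0.85, j_sc=27.0, ff=0.802))  # ~18.4
```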
9.
  • Pielawski, Nicolas, et al. (author)
  • CoMIR: Contrastive Multimodal Image Representation for Registration
  • 2020
  • In: NeurIPS - 34th Conference on Neural Information Processing Systems.
  • Conference paper (peer-reviewed). Abstract (an InfoNCE sketch follows this entry):
    • We propose contrastive coding to learn shared, dense image representations, referred to as CoMIRs (Contrastive Multimodal Image Representations). CoMIRs enable the registration of multimodal images where existing registration methods often fail due to a lack of sufficiently similar image structures. CoMIRs reduce the multimodal registration problem to a monomodal one, in which general intensity-based, as well as feature-based, registration algorithms can be applied. The method involves training one neural network per modality on aligned images, using a contrastive loss based on noise-contrastive estimation (InfoNCE). Unlike other contrastive coding methods, used for, e.g., classification, our approach generates image-like representations that contain the information shared between modalities. We introduce a novel, hyperparameter-free modification to InfoNCE, to enforce rotational equivariance of the learnt representations, a property essential to the registration task. We assess the extent of achieved rotational equivariance and the stability of the representations with respect to weight initialization, training set, and hyperparameter settings, on a remote sensing dataset of RGB and near-infrared images. We evaluate the learnt representations through registration of a biomedical dataset of bright-field and second-harmonic generation microscopy images; two modalities with very little apparent correlation. The proposed approach based on CoMIRs significantly outperforms registration of representations created by GAN-based image-to-image translation, as well as a state-of-the-art, application-specific method which takes additional knowledge about the data into account. Code is available at: https://github.com/MIDA-group/CoMIR.
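The contrastive loss named in the abstract, InfoNCE, fits in a few lines. A generic PyTorch sketch for a batch of paired embeddings is given below; CoMIR's actual training additionally enforces rotational equivariance through the authors' hyperparameter-free modification, which is not reproduced here.

```python
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """Symmetric InfoNCE over a batch of paired embeddings, where
    z1[i] and z2[i] come from corresponding patches of the two
    modalities; every other pairing in the batch acts as a negative."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau  # temperature-scaled cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.t(), targets)) / 2
```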
10.