SwePub
Search the SwePub database

  Advanced search

Result list for the search "WFRF:(Yang Ming Hsuan)"

Search: WFRF:(Yang Ming Hsuan)

  • Results 1-10 of 17
Sort/group the result list
1.
  • Beal, Jacob, et al. (author)
  • Robust estimation of bacterial cell count from optical density
  • 2020
  • In: Communications Biology. - : Springer Science and Business Media LLC. - 2399-3642. ; 3:1
  • Journal article (peer-reviewed) abstract
    • Optical density (OD) is widely used to estimate the density of cells in liquid culture, but cannot be compared between instruments without a standardized calibration protocol and is challenging to relate to actual cell count. We address this with an interlaboratory study comparing three simple, low-cost, and highly accessible OD calibration protocols across 244 laboratories, applied to eight strains of constitutive GFP-expressing E. coli. Based on our results, we recommend calibrating OD to estimated cell count using serial dilution of silica microspheres, which produces highly precise calibration (95.5% of residuals <1.2-fold), is easily assessed for quality control, also assesses instrument effective linear range, and can be combined with fluorescence calibration to obtain units of Molecules of Equivalent Fluorescein (MEFL) per cell, allowing direct comparison and data fusion with flow cytometry measurements: in our study, fluorescence per cell measurements showed only a 1.07-fold mean difference between plate reader and flow cytometry data.
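The calibration recommended in the abstract above reduces to fitting a linear particles-per-OD conversion factor from a serial dilution of microspheres with a known stock concentration. A minimal sketch in Python, with entirely invented numbers (the stock concentration, dilution depth, and OD readings are hypothetical, and the protocol's quality-control and linear-range checks are omitted):

    import numpy as np

    # Hypothetical 2-fold serial dilution of silica microspheres with a known
    # stock concentration, and the blank-subtracted OD measured at each step.
    stock_particles_per_ul = 3.0e5
    particles = stock_particles_per_ul / 2.0 ** np.arange(8)
    od = np.array([0.80, 0.41, 0.20, 0.10, 0.052, 0.026, 0.013, 0.007])

    # Within the instrument's linear range, particles ~ slope * OD;
    # fit the slope through the origin by least squares.
    slope = np.sum(particles * od) / np.sum(od ** 2)

    # Convert a sample's OD reading into an estimated cell count per microliter.
    sample_od = 0.35
    print(f"~{slope * sample_od:.3g} cells/uL at OD {sample_od}")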
2.
  • Kristan, Matej, et al. (author)
  • The Seventh Visual Object Tracking VOT2019 Challenge Results
  • 2019
  • In: 2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW). - : IEEE COMPUTER SOC. - 9781728150239 ; , pp. 2206-2241
  • Conference paper (peer-reviewed) abstract
    • The Visual Object Tracking challenge VOT2019 is the seventh annual tracker benchmarking activity organized by the VOT initiative. Results of 81 trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in recent years. The evaluation included the standard VOT and other popular methodologies for short-term tracking analysis as well as the standard VOT methodology for long-term tracking analysis. The VOT2019 challenge was composed of five challenges focusing on different tracking domains: (i) the VOT-ST2019 challenge focused on short-term tracking in RGB, (ii) the VOT-RT2019 challenge focused on "real-time" short-term tracking in RGB, and (iii) VOT-LT2019 focused on long-term tracking, namely coping with target disappearance and reappearance. Two new challenges were introduced: (iv) the VOT-RGBT2019 challenge focused on short-term tracking in RGB and thermal imagery and (v) the VOT-RGBD2019 challenge focused on long-term tracking in RGB and depth imagery. The VOT-ST2019, VOT-RT2019 and VOT-LT2019 datasets were refreshed, while new datasets were introduced for VOT-RGBT2019 and VOT-RGBD2019. The VOT toolkit has been updated to support standard short-term tracking, long-term tracking, and tracking with multi-channel imagery. Performance of the tested trackers typically far exceeds standard baselines. The source code for most of the trackers is publicly available from the VOT page. The dataset, the evaluation kit and the results are publicly available at the challenge website.
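The accuracy side of the VOT evaluation methodology referenced in these challenge reports is, at its core, an average region overlap between the tracker's predictions and the ground truth. A minimal sketch of that overlap computation, assuming axis-aligned (x, y, w, h) boxes for simplicity (the actual challenges also use rotated boxes and segmentation masks, and combine accuracy with robustness measures):

    import numpy as np

    def box_iou(a, b):
        # Intersection-over-union of two axis-aligned (x, y, w, h) boxes.
        iw = max(0.0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
        ih = max(0.0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
        inter = iw * ih
        union = a[2] * a[3] + b[2] * b[3] - inter
        return inter / union if union > 0 else 0.0

    def accuracy(pred_boxes, gt_boxes):
        # Mean overlap across the frames of a sequence.
        return float(np.mean([box_iou(p, g) for p, g in zip(pred_boxes, gt_boxes)]))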
3.
  • Kristan, Matej, et al. (author)
  • The first visual object tracking segmentation VOTS2023 challenge results
  • 2023
  • In: 2023 IEEE/CVF International conference on computer vision workshops (ICCVW). - : Institute of Electrical and Electronics Engineers Inc. - 9798350307443 - 9798350307450 ; , pp. 1788-1810
  • Conference paper (peer-reviewed) abstract
    • The Visual Object Tracking Segmentation VOTS2023 challenge is the eleventh annual tracker benchmarking activity of the VOT initiative. This challenge is the first to merge short-term and long-term as well as single-target and multiple-target tracking, with segmentation masks as the only target location specification. A new dataset was created; the ground truth has been withheld to prevent overfitting. New performance measures and evaluation protocols have been created along with a new toolkit and an evaluation server. Results of the 47 presented trackers indicate that modern tracking frameworks are well-suited to deal with the convergence of short-term and long-term tracking and that multiple- and single-target tracking can be considered a single problem. A leaderboard with participating trackers' details, the source code, the datasets, and the evaluation kit are publicly available at the challenge website.
4.
  • Kristan, Matej, et al. (author)
  • The Sixth Visual Object Tracking VOT2018 Challenge Results
  • 2019
  • In: Computer Vision – ECCV 2018 Workshops. - Cham : Springer Publishing Company. - 9783030110086 - 9783030110093 ; , pp. 3-53
  • Conference paper (peer-reviewed) abstract
    • The Visual Object Tracking challenge VOT2018 is the sixth annual tracker benchmarking activity organized by the VOT initiative. Results of over eighty trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in recent years. The evaluation included the standard VOT and other popular methodologies for short-term tracking analysis and a “real-time” experiment simulating a situation where a tracker processes images as if provided by a continuously running sensor. A long-term tracking sub-challenge has been introduced to the set of standard VOT sub-challenges. The new sub-challenge focuses on long-term tracking properties, namely coping with target disappearance and reappearance. A new dataset has been compiled and a performance evaluation methodology that focuses on long-term tracking capabilities has been adopted. The VOT toolkit has been updated to support both the standard short-term and the new long-term tracking sub-challenges. Performance of the tested trackers typically far exceeds standard baselines. The source code for most of the trackers is publicly available from the VOT page. The dataset, the evaluation kit and the results are publicly available at the challenge website (http://votchallenge.net).
5.
  • Kristan, Matej, et al. (author)
  • The Visual Object Tracking VOT2017 challenge results
  • 2017
  • In: 2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW 2017). - : IEEE. - 9781538610343 ; , pp. 1949-1972
  • Conference paper (peer-reviewed) abstract
    • The Visual Object Tracking challenge VOT2017 is the fifth annual tracker benchmarking activity organized by the VOT initiative. Results of 51 trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in recent years. The evaluation included the standard VOT and other popular methodologies and a new "real-time" experiment simulating a situation where a tracker processes images as if provided by a continuously running sensor. Performance of the tested trackers typically far exceeds standard baselines. The source code for most of the trackers is publicly available from the VOT page. VOT2017 goes beyond its predecessors by (i) improving the VOT public dataset and introducing a separate VOT2017 sequestered dataset, (ii) introducing a real-time tracking experiment and (iii) releasing a redesigned toolkit that supports complex experiments. The dataset, the evaluation kit and the results are publicly available at the challenge website.
6.
  • Dudhane, Akshay, et al. (author)
  • Burst Image Restoration and Enhancement
  • 2022
  • In: 2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022). - : IEEE COMPUTER SOC. - 9781665469463 - 9781665469470 ; , pp. 5749-5758
  • Conference paper (peer-reviewed) abstract
    • Modern handheld devices can acquire a burst image sequence in quick succession. However, the individual acquired frames suffer from multiple degradations and are misaligned due to camera shake and object motion. The goal of Burst Image Restoration is to effectively combine complementary cues across multiple burst frames to generate high-quality outputs. Towards this goal, we develop a novel approach by solely focusing on the effective information exchange between burst frames, such that the degradations get filtered out while the actual scene details are preserved and enhanced. Our central idea is to create a set of pseudo-burst features that combine complementary information from all the input burst frames to seamlessly exchange information. However, the pseudo-burst cannot be successfully created unless the individual burst frames are properly aligned to discount inter-frame movements. Therefore, our approach initially extracts pre-processed features from each burst frame and matches them using an edge-boosting burst alignment module. The pseudo-burst features are then created and enriched using multi-scale contextual information. Our final step is to adaptively aggregate information from the pseudo-burst features to progressively increase resolution in multiple stages while merging the pseudo-burst features. In comparison to existing works that usually follow a late fusion scheme with single-stage upsampling, our approach performs favorably, delivering state-of-the-art performance on burst super-resolution, burst low-light image enhancement and burst denoising tasks. The source code and pre-trained models are available at https://github.com/akshaydudhane16/BIPNet.
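The pseudo-burst idea in the abstract above can be pictured as regrouping aligned burst features so that each feature channel is gathered across all frames, letting later layers mix information burst-wide. A toy sketch of that regrouping (the tensor shapes and the preceding alignment step are assumptions, not taken from the BIPNet code):

    import torch

    def make_pseudo_burst(aligned_feats: torch.Tensor) -> torch.Tensor:
        # aligned_feats: (T, C, H, W) features from T burst frames, already
        # aligned to the base frame. Returns (C, T, H, W): pseudo-burst c
        # stacks channel c from every frame in the burst.
        return aligned_feats.permute(1, 0, 2, 3).contiguous()

    burst_feats = torch.randn(8, 64, 48, 48)   # hypothetical 8-frame burst
    pseudo = make_pseudo_burst(burst_feats)    # -> (64, 8, 48, 48)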
7.
  • Dudhane, Akshay, et al. (author)
  • Burstormer: Burst Image Restoration and Enhancement Transformer
  • 2023
  • In: 2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR. - : IEEE COMPUTER SOC. - 9798350301298 - 9798350301304 ; , pp. 5703-5712
  • Conference paper (peer-reviewed) abstract
    • On a shutter press, modern handheld cameras capture multiple images in rapid succession and merge them to generate a single image. However, individual frames in a burst are misaligned due to inevitable motion and contain multiple degradations. The challenge is to properly align the successive image shots and merge their complementary information to achieve high-quality outputs. Towards this direction, we propose Burstormer: a novel transformer-based architecture for burst image restoration and enhancement. In comparison to existing works, our approach exploits multi-scale local and non-local features to achieve improved alignment and feature fusion. Our key idea is to enable inter-frame communication in the burst neighborhoods for information aggregation and progressive fusion while modeling the burst-wide context. However, the input burst frames need to be properly aligned before fusing their information. Therefore, we propose an enhanced deformable alignment module for aligning burst features with regard to the reference frame. Unlike existing methods, the proposed alignment module not only aligns burst features but also exchanges feature information and maintains focused communication with the reference frame through the proposed reference-based feature enrichment mechanism, which facilitates handling complex motion. After multi-level alignment and enrichment, we re-emphasize inter-frame communication within the burst using a cyclic burst sampling module. Finally, the inter-frame information is aggregated using the proposed burst feature fusion module, followed by progressive upsampling. Our Burstormer outperforms state-of-the-art methods on burst super-resolution, burst denoising and burst low-light enhancement. Our codes and pre-trained models are available at https://github.com/akshaydudhane16/Burstormer.
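Both burst papers lean on deformable alignment of each frame's features to a reference frame. A generic sketch of such a module using torchvision's deformable convolution, where predicted offsets warp the frame's sampling grid toward the reference; the layer sizes and the offset head are assumptions, not Burstormer's actual architecture:

    import torch
    import torch.nn as nn
    from torchvision.ops import deform_conv2d

    class DeformableAlign(nn.Module):
        # Align one burst frame's features to the reference frame's.
        def __init__(self, channels: int, k: int = 3):
            super().__init__()
            # Offsets are predicted from the concatenated [frame, reference] features.
            self.offset_head = nn.Conv2d(2 * channels, 2 * k * k, 3, padding=1)
            self.weight = nn.Parameter(torch.randn(channels, channels, k, k) * 0.01)
            self.pad = k // 2

        def forward(self, frame: torch.Tensor, ref: torch.Tensor) -> torch.Tensor:
            offsets = self.offset_head(torch.cat([frame, ref], dim=1))
            return deform_conv2d(frame, offsets, self.weight, padding=self.pad)

    align = DeformableAlign(64)
    aligned = align(torch.randn(1, 64, 48, 48), torch.randn(1, 64, 48, 48))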
8.
  • Duffy, Stephen W., et al. (author)
  • Beneficial effect of consecutive screening mammography examinations on mortality from breast cancer : a prospective study
  • 2021
  • In: Radiology. - : Radiological Society of North America (RSNA). - 0033-8419 .- 1527-1315. ; 299:3, pp. 541-547
  • Journal article (peer-reviewed) abstract
    • Background: Previously, the risk of death from breast cancer was analyzed for women participating versus those not participating in the last screening examination before breast cancer diagnosis. Consecutive attendance patterns may further refine estimates. Purpose: To estimate the effect of participation in successive mammographic screening examinations on breast cancer mortality. Materials and Methods: Participation data for Swedish women eligible for screening mammography in nine counties from 1992 to 2016 were linked with data from registries and regional cancer centers for breast cancer diagnosis, cause, and date of death (Uppsala University ethics committee registration number: 2017/147). Incidence-based breast cancer mortality was calculated by whether the women had participated in the most recent screening examination prior to diagnosis only (intermittent participants), the penultimate screening examination only (lapsed participants), both examinations (serial participants), or neither examination (serial nonparticipants). Rates were analyzed with Poisson regression. We also analyzed the incidence of breast cancers proving fatal within 10 years. Results: Data were available for a total average population of 549 091 women (average age, 58.9 years ± 6.7 [standard deviation]). The numbers of participants in the four groups were as follows: serial participants, 392 135; intermittent participants, 41 746; lapsed participants, 30 945; and serial nonparticipants, 84 265. Serial participants had a 49% lower risk of breast cancer mortality (relative risk [RR], 0.51; 95% CI: 0.48, 0.55; P < .001) and a 50% lower risk of death from breast cancer within 10 years of diagnosis (RR, 0.50; 95% CI: 0.46, 0.55; P < .001) than serial nonparticipants. Lapsed and intermittent participants had a smaller reduction. Serial participants had a significantly lower risk of both outcomes than lapsed or intermittent participants. Analyses correcting for potential biases made little difference to the results. Conclusion: Women participating in the last two breast cancer screening examinations prior to breast cancer diagnosis had the largest reduction in breast cancer death. Missing either one of the last two examinations conferred a significantly higher risk.
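The group comparison in this study boils down to Poisson regression of breast cancer death counts with person-years as the exposure, so that exponentiated coefficients are mortality rate ratios. A sketch with statsmodels and entirely invented counts (the published analysis additionally corrects for potential biases such as self-selection):

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical aggregated data by screening-participation pattern.
    df = pd.DataFrame({
        "group": ["serial", "intermittent", "lapsed", "none"],
        "deaths": [310, 60, 55, 260],
        "person_years": [3.9e6, 4.1e5, 3.0e5, 8.4e5],
    })

    # Dummy-code groups with serial nonparticipants ("none") as the reference.
    X = sm.add_constant(pd.get_dummies(df["group"]).drop(columns="none")).astype(float)
    fit = sm.GLM(df["deaths"], X, family=sm.families.Poisson(),
                 offset=np.log(df["person_years"])).fit()

    # exp(coefficient) = mortality rate ratio versus serial nonparticipants.
    print(np.exp(fit.params))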
9.
  • Khan, Salman, et al. (author)
  • Guest Editorial Introduction to the Special Section on Transformer Models in Vision
  • 2023
  • In: IEEE Transactions on Pattern Analysis and Machine Intelligence. - : IEEE COMPUTER SOC. - 0162-8828 .- 1939-3539. ; 45:11, pp. 12721-12725
  • Journal article (other academic/artistic) abstract
    • Transformer models have achieved outstanding results on a variety of language tasks, such as text classification, machine translation, and question answering. This success in the field of Natural Language Processing (NLP) has sparked interest in the computer vision community to apply these models to vision and multi-modal learning tasks. However, visual data has a unique structure, requiring network designs and training methods to be rethought. As a result, Transformer models and their variations have been successfully used for image recognition, object detection, segmentation, image super-resolution, video understanding, image generation, text-image synthesis, and visual question answering, among other applications.
10.
  • Khattak, Muhammad Uzair, et al. (author)
  • Self-regulating Prompts: Foundational Model Adaptation without Forgetting
  • 2023
  • In: 2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023). - : IEEE COMPUTER SOC. - 9798350307184 - 9798350307191 ; , pp. 15144-15154
  • Conference paper (peer-reviewed) abstract
    • Prompt learning has emerged as an efficient alternative to fine-tuning foundational models, such as CLIP, for various downstream tasks. Conventionally trained using the task-specific objective, i.e., cross-entropy loss, prompts tend to overfit downstream data distributions and find it challenging to capture task-agnostic general features from the frozen CLIP. This leads to the loss of the model's original generalization capability. To address this issue, our work introduces a self-regularization framework for prompting called PromptSRC (Prompting with Self-regulating Constraints). PromptSRC guides the prompts to optimize for both task-specific and task-agnostic general representations using a three-pronged approach: (a) regulating prompted representations via mutual agreement maximization with the frozen model, (b) regulating with a self-ensemble of prompts over the training trajectory to encode their complementary strengths, and (c) regulating with textual diversity to mitigate sample diversity imbalance with the visual branch. To the best of our knowledge, this is the first regularization framework for prompt learning that avoids overfitting by jointly attending to pre-trained model features, the training trajectory during prompting, and textual diversity. PromptSRC explicitly steers the prompts to learn a representation space that maximizes performance on downstream tasks without compromising CLIP's generalization. We perform extensive experiments on 4 benchmarks where PromptSRC overall performs favorably compared to existing methods. Our code and pre-trained models are publicly available at: https://github.com/muzairkhattak/PromptSRC.
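The first of PromptSRC's three prongs, mutual agreement maximization with the frozen model, can be sketched as consistency penalties added to the usual task loss. The distance choices and the single weight below are illustrative assumptions rather than the paper's exact formulation:

    import torch
    import torch.nn.functional as F

    def mutual_agreement_loss(feat_prompted, logits_prompted,
                              feat_frozen, logits_frozen, labels, lam=1.0):
        # Usual task-specific objective on the prompted branch.
        ce = F.cross_entropy(logits_prompted, labels)
        # Keep prompted features and logits close to the frozen CLIP's.
        l_feat = F.l1_loss(feat_prompted, feat_frozen.detach())
        l_logit = F.kl_div(F.log_softmax(logits_prompted, dim=-1),
                           F.softmax(logits_frozen.detach(), dim=-1),
                           reduction="batchmean")
        return ce + lam * (l_feat + l_logit)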
Publication type
conference paper (11)
journal article (5)
other publication (1)
Content type
peer-reviewed (15)
other academic/artistic (2)
Author/editor
Yang, Ming-Hsuan (13)
Khan, Fahad (9)
Khan, Salman (8)
Bhat, Goutam (4)
Matas, Jiri (4)
Fernandez, Gustavo (4)
Wang, Dong (3)
van de Weijer, Joost (3)
Khan, Fahad Shahbaz, ... (3)
Danelljan, Martin (3)
Zamir, Syed Waqas (3)
Eldesokey, Abdelrahm ... (3)
Kristan, Matej (3)
Leonardis, Ales (3)
Pflugfelder, Roman (3)
Wang, Fei (2)
Wang, Qiang (2)
Li, Jing (2)
Chen, Yan (2)
Li, Xin (2)
Mishra, Deepak (2)
Lagging, Martin, 196 ... (2)
Aleman, Soo (2)
Alghamdi, Abdullah S ... (2)
Dudhane, Akshay (2)
Yang, Fan (2)
Anwer, Rao Muhammad (2)
Coppola, Nicola (2)
Felsberg, Michael (2)
Torr, Philip H.S. (2)
Gao, Jie (2)
Zeuzem, Stefan (2)
Li, Bo (2)
Jia, Jidong (2)
Berg, Thomas (2)
Tacke, Frank (2)
Bai, Shuai (2)
Wu, Yi (2)
Felsberg, Michael, 1 ... (2)
Cholakkal, Hisham (2)
Wang, Ning (2)
Aghemo, Alessio (2)
Zhao, Fei (2)
Van Gool, Luc (2)
Zhao, Jie (2)
Buti, Maria (2)
Yang, Lingxiao (2)
Craxi, Antonio (2)
Bowden, Richard (2)
Vojır, Tomas (2)
Higher education institution
Linköpings universitet (13)
Karolinska Institutet (3)
Göteborgs universitet (2)
Umeå universitet (2)
Uppsala universitet (1)
Örebro universitet (1)
Chalmers tekniska högskola (1)
Language
English (17)
Research subject (UKÄ/SCB)
Natural sciences (13)
Medicine and health sciences (3)
Engineering and technology (1)
