SwePub
  • Dudhane, Akshay, Mohamed Bin Zayed Univ AI, U Arab Emirates (author)

Burst Image Restoration and Enhancement

  • Article/chapter, English, 2022

Publisher, publication year, extent ...

  • IEEE Computer Soc, 2022
  • Carrier type: print (rdacarrier)

Numbers

  • LIBRIS-ID: oai:DiVA.org:liu-190653
  • URI: https://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-190653
  • DOI: https://doi.org/10.1109/CVPR52688.2022.00567

Supplementary language notes

  • Language: English
  • Summary in: English

Classification

  • Subject category: ref (swepub-contenttype)
  • Subject category: kon (swepub-publicationtype)

Notes

  • Funding Agencies: NSF CAREER Grant [1149783]
  • Modern handheld devices can acquire burst image sequences in quick succession. However, the individual acquired frames suffer from multiple degradations and are misaligned due to camera shake and object motion. The goal of Burst Image Restoration is to effectively combine complementary cues across multiple burst frames to generate high-quality outputs. Towards this goal, we develop a novel approach by solely focusing on the effective information exchange between burst frames, such that the degradations get filtered out while the actual scene details are preserved and enhanced. Our central idea is to create a set of pseudo-burst features that combine complementary information from all the input burst frames to seamlessly exchange information. However, the pseudo-burst cannot be successfully created unless the individual burst frames are properly aligned to discount inter-frame movements. Therefore, our approach initially extracts pre-processed features from each burst frame and matches them using an edge-boosting burst alignment module. The pseudo-burst features are then created and enriched using multi-scale contextual information. Our final step is to adaptively aggregate information from the pseudo-burst features to progressively increase resolution in multiple stages while merging the pseudo-burst features. In comparison to existing works that usually follow a late fusion scheme with single-stage upsampling, our approach performs favorably, delivering state-of-the-art performance on burst super-resolution, burst low-light image enhancement, and burst denoising tasks. The source code and pre-trained models are available at https://github.com/akshaydudhane16/BIPNet.
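The pseudo-burst construction described in the abstract can be illustrated as a channel-wise regrouping of aligned per-frame features. The following is a minimal NumPy sketch of the data movement only, not the authors' implementation: the actual BIPNet (linked above) surrounds this step with learned convolutional layers and an edge-boosting alignment module, and the function and variable names here are illustrative.

```python
import numpy as np

def make_pseudo_burst(aligned_feats):
    """Regroup aligned per-frame features into pseudo-burst features.

    aligned_feats: array of shape (B, C, H, W) -- B burst frames with
    C feature channels each, assumed already aligned (the paper handles
    alignment with an edge-boosting burst alignment module).

    Returns an array of shape (C, B, H, W): the c-th pseudo-burst stacks
    channel c from every frame, so each pseudo-burst carries complementary
    information from the whole burst.
    """
    return np.transpose(aligned_feats, (1, 0, 2, 3))

# Toy burst: 4 frames, 3 feature channels, 8x8 spatial maps.
burst = np.random.rand(4, 3, 8, 8)
pseudo = make_pseudo_burst(burst)
print(pseudo.shape)  # (3, 4, 8, 8)
```

Each pseudo-burst feature then sees the same channel across all frames at once, which is what lets later stages exchange information between frames before the multi-stage upsampling.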

Added entries (persons, corporate bodies, meetings, titles ...)

  • Zamir, Syed Waqas, Incept Inst AI, U Arab Emirates (author)
  • Khan, Salman, Mohamed Bin Zayed Univ AI, U Arab Emirates; Australian Natl Univ, Australia (author)
  • Khan, Fahad, Linköpings universitet, Datorseende, Tekniska fakulteten; Mohamed Bin Zayed Univ AI, U Arab Emirates, (Swepub:liu) fahkh30 (author)
  • Yang, Ming-Hsuan, Univ Calif Merced, CA USA; Yonsei Univ, South Korea; Google Res, CA USA (author)
  • Mohamed Bin Zayed Univ AI, U Arab Emirates; Incept Inst AI, U Arab Emirates (creator_code: org_t)

Related titles

  • In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2022), IEEE Computer Soc, p. 5749-5758. ISBN 9781665469463, 9781665469470
