SwePub

Fine-tuned CLIP Models are Efficient Video Learners

Rasheed, Hanoona (author)
Mohamed Bin Zayed Univ AI, U Arab Emirates
Khattak, Muhammad Uzair (author)
Mohamed Bin Zayed Univ AI, U Arab Emirates
Maaz, Muhammad (author)
Mohamed Bin Zayed Univ AI, U Arab Emirates
Khan, Salman (author)
Mohamed Bin Zayed Univ AI, U Arab Emirates; Australian Natl Univ, Australia
Khan, Fahad (author)
Linköping University, Computer Vision, Faculty of Science and Engineering; Mohamed Bin Zayed Univ AI, U Arab Emirates
IEEE COMPUTER SOC, 2023
English.
In: 2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR. - : IEEE COMPUTER SOC. - 9798350301298 - 9798350301304 ; pp. 6545-6554
  • Conference paper (peer-reviewed)
Abstract
  • Large-scale multi-modal training with image-text pairs imparts strong generalization to the CLIP model. Since training at a similar scale for videos is infeasible, recent approaches focus on effectively transferring image-based CLIP to the video domain. In this pursuit, new parametric modules are added to learn temporal information and inter-frame relationships, which requires meticulous design effort. Furthermore, when the resulting models are trained on videos, they tend to overfit to the given task distribution and lose generalization. This begs the following question: how can image-level CLIP representations be effectively transferred to videos? In this work, we show that a simple Video Fine-tuned CLIP (ViFi-CLIP) baseline is generally sufficient to bridge the domain gap from images to videos. Our qualitative analysis illustrates that frame-level processing by the CLIP image encoder, followed by feature pooling and similarity matching with the corresponding text embeddings, helps implicitly model temporal cues within ViFi-CLIP. Such fine-tuning helps the model focus on scene dynamics, moving objects and inter-object relationships. For low-data regimes where full fine-tuning is not viable, we propose a bridge and prompt approach that first uses fine-tuning to bridge the domain gap and then learns prompts on the language and vision sides to adapt the CLIP representations. We extensively evaluate this simple yet strong baseline in zero-shot, base-to-novel generalization, few-shot and fully supervised settings across five video benchmarks. Our code and pre-trained models are available at https://github.com/muzairkhattak/ViFi-CLIP.
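For orientation, the following is a minimal PyTorch sketch of the pipeline the abstract describes: encode each frame independently with the CLIP image encoder, average-pool the frame features over time, and score the pooled video embedding against class-text embeddings by cosine similarity. The class name ViFiCLIPSketch, the stand-in encoders and all tensor shapes are illustrative assumptions, not the authors' released implementation (see the linked repository for that).

# Sketch of the frame-level pooling-and-matching idea from the abstract.
# The encoders passed in are hypothetical stand-ins; in practice they would
# be the fine-tuned CLIP image and text encoders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ViFiCLIPSketch(nn.Module):
    def __init__(self, image_encoder: nn.Module, text_encoder: nn.Module,
                 logit_scale: float = 100.0):
        super().__init__()
        self.image_encoder = image_encoder  # maps (N, 3, H, W) -> (N, D)
        self.text_encoder = text_encoder    # maps token ids (C, L) -> (C, D)
        self.logit_scale = logit_scale      # CLIP-style temperature

    def forward(self, video: torch.Tensor, text_tokens: torch.Tensor) -> torch.Tensor:
        # video: (B, T, 3, H, W) -- B clips of T frames each
        b, t = video.shape[:2]
        frames = video.flatten(0, 1)              # (B*T, 3, H, W)
        frame_feats = self.image_encoder(frames)  # frame-level features, (B*T, D)
        frame_feats = frame_feats.view(b, t, -1)
        video_feats = frame_feats.mean(dim=1)     # temporal average pooling -> (B, D)
        video_feats = F.normalize(video_feats, dim=-1)
        text_feats = F.normalize(self.text_encoder(text_tokens), dim=-1)  # (C, D)
        return self.logit_scale * video_feats @ text_feats.t()  # logits, (B, C)

A toy invocation with dummy encoders (also hypothetical), just to show the expected shapes:

# Tiny stand-in encoders so the sketch runs end to end.
img_enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64))
txt_enc = nn.Sequential(nn.Embedding(100, 64), nn.Flatten(), nn.Linear(8 * 64, 64))
model = ViFiCLIPSketch(img_enc, txt_enc)
logits = model(torch.randn(2, 16, 3, 32, 32),   # 2 clips, 16 frames each
               torch.randint(0, 100, (5, 8)))   # 5 class prompts, 8 tokens each
print(logits.shape)  # torch.Size([2, 5])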

Subject headings

NATURAL SCIENCES -- Computer and Information Sciences -- Computer Vision and Robotics (hsv//eng)

Publication and content type

ref (peer-reviewed)
kon (conference paper)

