SwePub

Self-regulating Prompts: Foundational Model Adaptation without Forgetting

Khattak, Muhammad Uzair (author)
Mohamed Bin Zayed Univ AI, U Arab Emirates
Wasim, Syed Talal (author)
Mohamed Bin Zayed Univ AI, U Arab Emirates
Naseer, Muzammal (author)
Mohamed Bin Zayed Univ AI, U Arab Emirates
Khan, Salman (author)
Mohamed Bin Zayed Univ AI, U Arab Emirates; Australian Natl Univ, Australia
Yang, Ming-Hsuan (author)
Univ Calif, CA USA; Google Res, CA USA
Khan, Fahad (author)
Linköping University, Computer Vision, Faculty of Science and Engineering; Mohamed Bin Zayed Univ AI, U Arab Emirates
IEEE COMPUTER SOC, 2023
English.
In: 2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023). - : IEEE COMPUTER SOC. - 9798350307184 - 9798350307191 ; pp. 15144-15154
  • Conference paper (peer-reviewed)
Abstract
  • Prompt learning has emerged as an efficient alternative for fine-tuning foundational models, such as CLIP, for various downstream tasks. Conventionally trained using the task-specific objective, i.e., cross-entropy loss, prompts tend to overfit downstream data distributions and find it challenging to capture task-agnostic general features from the frozen CLIP. This leads to the loss of the model's original generalization capability. To address this issue, our work introduces a self-regularization framework for prompting called PromptSRC (Prompting with Self-regulating Constraints). PromptSRC guides the prompts to optimize for both task-specific and task-agnostic general representations using a three-pronged approach by: (a) regulating prompted representations via mutual agreement maximization with the frozen model, (b) regulating with a self-ensemble of prompts over the training trajectory to encode their complementary strengths, and (c) regulating with textual diversity to mitigate sample diversity imbalance with the visual branch. To the best of our knowledge, this is the first regularization framework for prompt learning that avoids overfitting by jointly attending to pre-trained model features, the training trajectory during prompting, and textual diversity. PromptSRC explicitly steers the prompts to learn a representation space that maximizes performance on downstream tasks without compromising CLIP generalization. We perform extensive experiments on 4 benchmarks where PromptSRC overall performs favorably compared to existing methods. Our code and pre-trained models are publicly available at: https://github.com/muzairkhattak/PromptSRC.
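The three regularizers described in the abstract can be sketched conceptually as follows. This is an illustrative sketch, not the authors' implementation (see the linked repository for that): the L1 mutual-agreement term, the Gaussian weighting schedule, and all function names and hyperparameters here are assumptions chosen for clarity.

```python
# Conceptual sketch of PromptSRC-style self-regulating constraints.
# Illustrative only: loss forms, names, and weights are assumptions,
# not the paper's exact formulation.
import numpy as np

def mutual_agreement(prompted_feat, frozen_feat):
    """(a) Penalize disagreement between prompted and frozen-CLIP
    features, keeping prompts close to the pre-trained representation."""
    return np.abs(prompted_feat - frozen_feat).mean()

def gaussian_weights(num_epochs, mu=None, sigma=None):
    """(b) Gaussian weights over training epochs, used to self-ensemble
    prompts along the training trajectory (mu/sigma are assumed values)."""
    t = np.arange(num_epochs, dtype=float)
    mu = 0.6 * num_epochs if mu is None else mu
    sigma = 0.15 * num_epochs if sigma is None else sigma
    w = np.exp(-((t - mu) ** 2) / (2.0 * sigma ** 2))
    return w / w.sum()

def ensemble_prompts(prompt_history):
    """Weighted average of the per-epoch prompt vectors; the aggregated
    prompt encodes complementary strengths of early and late epochs."""
    history = np.asarray(prompt_history, dtype=float)
    w = gaussian_weights(len(history))
    return (w[:, None] * history).sum(axis=0)

# Usage: regularize a training step and ensemble prompts afterwards.
# (c) textual diversity would correspond to averaging text features over
# multiple caption templates per class; omitted here for brevity.
rng = np.random.default_rng(0)
prompted = rng.normal(size=512)   # hypothetical prompted image feature
frozen = rng.normal(size=512)     # hypothetical frozen-CLIP feature
reg_loss = mutual_agreement(prompted, frozen)
history = [rng.normal(size=16) for _ in range(10)]  # prompt per epoch
final_prompt = ensemble_prompts(history)
```

The ensemble step runs after training: instead of keeping only the last epoch's prompt (which may be overfit) or the best validation epoch, a Gaussian-weighted average over the trajectory trades off early generalization against late task fit.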

Subject headings

NATURAL SCIENCES -- Computer and Information Sciences -- Other Computer and Information Science (hsv//eng)

Publication and content type

ref (subject category)
kon (subject category)
