SwePub

Video-FocalNets: Spatio-Temporal Focal Modulation for Video Action Recognition

Wasim, Syed Talal (author)
Mohamed Bin Zayed Univ AI, U Arab Emirates
Khattak, Muhammad Uzair (author)
Mohamed Bin Zayed Univ AI, U Arab Emirates
Naseer, Muzammal (author)
Mohamed Bin Zayed Univ AI, U Arab Emirates
Khan, Salman (author)
Mohamed Bin Zayed Univ AI, U Arab Emirates; Australian Natl Univ, Australia
Shah, Mubarak (author)
Univ Cent Florida, FL 32816 USA
Khan, Fahad (author)
Linköpings universitet, Datorseende, Tekniska fakulteten; Mohamed Bin Zayed Univ AI, U Arab Emirates
IEEE COMPUTER SOC, 2023
English.
In: 2023 IEEE/CVF International Conference on Computer Vision (ICCV 2023). IEEE COMPUTER SOC. ISBN 9798350307184, 9798350307191. pp. 13732-13743
  • Conference paper (peer-reviewed)
Abstract
Recent video recognition models utilize Transformer models for long-range spatio-temporal context modeling. Video transformer designs are based on self-attention that can model global context at a high computational cost. In comparison, convolutional designs for videos offer an efficient alternative but lack long-range dependency modeling. Towards achieving the best of both designs, this work proposes Video-FocalNet, an effective and efficient architecture for video recognition that models both local and global contexts. Video-FocalNet is based on a spatio-temporal focal modulation architecture that reverses the interaction and aggregation steps of self-attention for better efficiency. Further, the aggregation step and the interaction step are both implemented using efficient convolution and element-wise multiplication operations that are computationally less expensive than their self-attention counterparts on video representations. We extensively explore the design space of focal modulation-based spatio-temporal context modeling and demonstrate our parallel spatial and temporal encoding design to be the optimal choice. Video-FocalNets perform favorably against the state-of-the-art transformer-based models for video recognition on five large-scale datasets (Kinetics-400, Kinetics-600, SS-v2, Diving-48, and ActivityNet-1.3) at a lower computational cost. Our code/models are released at https://github.com/TalalWasim/Video-FocalNets.
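The core idea in the abstract, reversing self-attention's order of operations by first aggregating context and then interacting with the query via element-wise multiplication, can be sketched in plain NumPy. This is a simplified illustration under stated assumptions (1-D token sequence, mean pooling standing in for the paper's depthwise convolutions, projections omitted), not the authors' implementation; see the linked repository for the real model.

```python
import numpy as np

def local_mean(x, k):
    """Mean over a window of size k along the token axis (edge-padded).
    A cheap stand-in for the depthwise convolutions used in focal modulation."""
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)), mode="edge")
    return np.stack([xp[i:i + k].mean(axis=0) for i in range(x.shape[0])])

def focal_modulation(x, kernels=(3, 5)):
    """x: (tokens, channels) -> modulated features of the same shape.
    Aggregation first (multi-scale context pooling), then interaction
    (element-wise product with the query) -- the reverse of self-attention,
    which computes pairwise query-key interactions before aggregating."""
    q = x                                    # query projection omitted for brevity
    ctx = sum(local_mean(x, k) for k in kernels) / len(kernels)
    gate = 1.0 / (1.0 + np.exp(-ctx))        # sigmoid gating of pooled context
    return q * gate                          # element-wise interaction, O(tokens)

x = np.random.default_rng(0).normal(size=(8, 4))
y = focal_modulation(x)
```

Because the interaction is an element-wise product rather than a tokens-by-tokens attention matrix, the cost grows linearly with sequence length, which is the efficiency argument the abstract makes for video inputs.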

Subject headings

NATURAL SCIENCES -- Computer and Information Sciences -- Other Computer and Information Science (hsv//eng)

Publication and content type

ref (peer-reviewed)
kon (conference paper)
