SwePub
Points to patches: Enabling the use of self-attention for 3D shape recognition

Berg, Axel (author)
Lund University, Faculty of Engineering (LTH), Centre for Mathematical Sciences, Mathematics
Oskarsson, Magnus (author)
Lund University, Faculty of Engineering (LTH), Centre for Mathematical Sciences, Mathematics; Mathematical Imaging Group
O'Connor, Mark (author)
ARM
2022
English, 7 pages
In: 2022 26th International Conference on Pattern Recognition (ICPR), pp. 528-534. ISSN 1051-4651, 2831-7475. ISBN 9781665490627.
  • Conference paper (peer-reviewed)
Abstract
While the Transformer architecture has become ubiquitous in the machine learning field, its adaptation to 3D shape recognition is non-trivial. Due to its quadratic computational complexity, the self-attention operator quickly becomes inefficient as the set of input points grows larger. Furthermore, we find that the attention mechanism struggles to find useful connections between individual points on a global scale. In order to alleviate these problems, we propose a two-stage Point Transformer-in-Transformer (Point-TnT) approach which combines local and global attention mechanisms, enabling both individual points and patches of points to attend to each other effectively. Experiments on shape classification show that such an approach provides more useful features for downstream tasks than the baseline Transformer, while also being more computationally efficient. In addition, we extend our method to feature matching for scene reconstruction, showing that it can be used in conjunction with existing scene reconstruction pipelines.
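The abstract describes a two-stage scheme: self-attention is applied locally among the points inside each patch, and then globally among patch-level tokens, so the quadratic cost is paid only over small sets. The following is a minimal NumPy sketch of that general idea, not the paper's implementation: it assumes contiguous equal-size patches, mean pooling as the patch summary, and identity query/key/value projections, whereas the published Point-TnT model uses learned projections and a more careful patch construction.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # x: (n, d). Single-head scaled dot-product attention with
    # identity projections, for brevity.
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)          # (n, n) pairwise scores
    return softmax(scores) @ x             # (n, d) attended features

def point_tnt_block(points, n_patches):
    # Illustrative two-stage block (hypothetical helper, not the
    # paper's code). Assumes len(points) is divisible by n_patches
    # and that contiguous slices form the patches.
    n, d = points.shape
    patches = points.reshape(n_patches, n // n_patches, d)
    # Stage 1: local attention among the points inside each patch.
    local = np.stack([self_attention(p) for p in patches])
    # Mean pooling stands in for a learned patch embedding.
    patch_tokens = local.mean(axis=1)             # (n_patches, d)
    # Stage 2: global attention among the patch tokens.
    global_tokens = self_attention(patch_tokens)  # (n_patches, d)
    return local.reshape(n, d), global_tokens
```

With n points split into p patches of size n/p, the score matrices cost roughly p·(n/p)² + p² instead of n², which is the efficiency argument the abstract makes against plain global self-attention.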

Subject headings

NATURAL SCIENCES -- Computer and Information Sciences -- Computer Vision and Robotics (hsv//eng)

Publication and Content Type

kon (conference paper)
ref (peer-reviewed)

Find in a library

To the university's database


Find more in SwePub

By the author/editor
Berg, Axel
Oskarsson, Magnu ...
O'Connor, Mark
About the subject
NATURAL SCIENCES
NATURAL SCIENCES
and Computer and Inf ...
and Computer Vision ...
Articles in the publication
2022 26th Intern ...
By the university
Lund University
