SwePub

  • Ravichandran, Naresh Balaji (author). KTH, Beräkningsvetenskap och beräkningsteknik (CST), Computational Cognitive Brain Science Group

Brain-like Combination of Feedforward and Recurrent Network Components Achieves Prototype Extraction and Robust Pattern Recognition

  • Article/chapter, English, 2023

Publisher, publication year, extent ...

  • 2023-03-10
  • Cham: Springer Nature, 2023
  • print (rdacarrier)

Numbers

  • LIBRIS-ID: oai:DiVA.org:kth-329366
  • URI: https://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-329366
  • DOI: https://doi.org/10.1007/978-3-031-25891-6_37
  • URI: https://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-326225

Supplementary language notes

  • Language: English
  • Summary in: English

Classification

  • Content type: ref (peer-reviewed) (swepub-contenttype)
  • Publication type: kon (conference paper) (swepub-publicationtype)

Notes

  • QC 20230621
  • QC 20230503
  • Associative memory has been a prominent candidate for the computation performed by the massively recurrent neocortical networks. Attractor networks implementing associative memory have offered a mechanistic explanation for many cognitive phenomena. However, attractor memory models are typically trained using orthogonal or random patterns to avoid interference between memories, which makes them infeasible for naturally occurring complex correlated stimuli like images. We approach this problem by combining a recurrent attractor network with a feedforward network that learns distributed representations using an unsupervised Hebbian-Bayesian learning rule. The resulting network model incorporates many known biological properties: unsupervised learning, Hebbian plasticity, sparse distributed activations, sparse connectivity, columnar and laminar cortical architecture, etc. We evaluate the synergistic effects of the feedforward and recurrent network components in complex pattern recognition tasks on the MNIST handwritten digits dataset. We demonstrate that the recurrent attractor component implements associative memory when trained on the feedforward-driven internal (hidden) representations. The associative memory is also shown to perform prototype extraction from the training data and to make the representations robust to severely distorted input. We argue that several aspects of the proposed integration of feedforward and recurrent computations are particularly attractive from a machine learning perspective.
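
The abstract describes two interacting components: a feedforward network that learns sparse distributed representations with an unsupervised Hebbian-Bayesian rule, and a recurrent attractor network that implements associative memory on top of those representations. The toy sketch below (Python/NumPy) illustrates the general idea under stated assumptions; it is not the authors' implementation. It assumes a BCPNN-style formulation (log-odds weights computed from exponentially decaying probability traces) and winner-take-all hypercolumns; every name and parameter here (wta, update_traces, recall, H, M, alpha) is invented for the demo, and random one-hot patterns stand in for the paper's feedforward-driven hidden representations of MNIST digits.

import numpy as np

rng = np.random.default_rng(0)

def wta(support, n_hcols, units_per_hcol):
    # Sparse activation: exactly one winning unit per hypercolumn
    # (a stand-in for the columnar architecture the abstract mentions).
    s = support.reshape(n_hcols, units_per_hcol)
    y = np.zeros_like(s)
    y[np.arange(n_hcols), s.argmax(axis=1)] = 1.0
    return y.ravel()

def update_traces(x, p, pp, alpha=0.01):
    # Exponentially decaying estimates of unit (p) and pairwise (pp)
    # activation probabilities; the Hebbian part of the rule.
    p += alpha * (x - p)
    pp += alpha * (np.outer(x, x) - pp)

def bcpnn_weights(p, pp, eps=1e-8):
    # The Bayesian part: weights are the log-odds of joint vs. independent
    # activation, and biases are log prior activation probabilities.
    w = np.log((pp + eps) / (np.outer(p, p) + eps))
    b = np.log(p + eps)
    np.fill_diagonal(w, 0.0)  # no self-connections in the recurrent net
    return w, b

def recall(y, w, b, n_hcols, units_per_hcol, n_steps=10):
    # Attractor dynamics: iterate support -> winner-take-all until a
    # fixed point, i.e. associative memory recall.
    for _ in range(n_steps):
        y = wta(w @ y + b, n_hcols, units_per_hcol)
    return y

# Toy run: store three sparse "prototype" patterns, then recall from a
# severely distorted cue, mirroring the abstract's robustness argument.
H, M = 8, 4                               # 8 hypercolumns, 4 units each
N = H * M
patterns = [wta(rng.random(N), H, M) for _ in range(3)]

p = np.full(N, 1.0 / M)                   # prior: one winner among M units
pp = np.full((N, N), 1.0 / M ** 2)
for _ in range(300):                      # repeated presentations
    for x in patterns:
        update_traces(x, p, pp)
w, b = bcpnn_weights(p, pp)

cue = patterns[0].reshape(H, M).copy()
cue[:2] = np.roll(cue[:2], 1, axis=1)     # corrupt the winners in two hypercolumns
recalled = recall(cue.ravel(), w, b, H, M)
print("hypercolumns matching the stored prototype:",
      int(recalled @ patterns[0]), "of", H)

In the paper's setup the recurrent weights are trained on hidden representations of MNIST images rather than on random patterns, but the mechanism sketched here is the same: the attractor dynamics pull a distorted cue back to a stored prototype.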

Added entries (persons, corporate bodies, meetings, titles ...)

  • Lansner, Anders, Professor, 1949- (author). KTH, Beräkningsvetenskap och beräkningsteknik (CST); Stockholm Univ, Dept Math, Stockholm, Sweden; Computational Cognitive Brain Science Group. (Swepub:kth) u12s8cr8
  • Herman, Pawel, 1979- (author). KTH, Beräkningsvetenskap och beräkningsteknik (CST); Computational Cognitive Brain Science Group. (Swepub:kth) u19pqm1e
  • KTH, Beräkningsvetenskap och beräkningsteknik (CST) (creator_code: org_t)

Related titles

  • In: Lecture Notes in Computer Science. Cham: Springer Nature, pp. 488-501

