SwePub
Search the SwePub database


Result list for the search "WFRF:(Lansner Anders Professor 1949 ) "

Search: WFRF:(Lansner Anders Professor 1949 )

  • Results 1-10 of 14
1.
  • Stathis, Dimitrios, 1989- (author)
  • Synchoros VLSI Design Style
  • 2022
  • Doctoral thesis (other academic/artistic), abstract:
    • Computers have become as essential to everyday life as electricity, communications and transport. That is evident from the amount of electricity we spend to power our computing systems; according to some reports it is estimated at ≈ 7% of total consumption worldwide. This trend is very worrisome, and the development of computing systems with lower power consumption is essential. This is even more important for battery-powered computers deployed in the field. Industry and the scientific community have realised that general-purpose computing platforms cannot offer that level of computational efficiency and that customisation is the solution to this problem. Application-Specific Integrated Circuits (ASICs) provide the highest efficiency among the mainstream implementation styles and have been shown to deliver 100 to 1000× better computational efficiency than general-purpose computing platforms. However, the design cost of ASICs restricts them to products with large volumes or large profits. In essence, to achieve ASIC-like computational efficiency, design efficiency becomes the bottleneck. Synchoros VLSI design has been proposed to non-incrementally lower the design cost of custom ASIC-like solutions. In synchoros design, space is discretised, and the final design emerges by the abutment of synchoros micro-architecture-level design objects called SiLago (Silicon Lego) blocks. The SiLago framework has the potential to reduce the design cost of ASICs and their manufacturing. This thesis contributes to three research areas within synchoros VLSI design. The first area concerns composition by abutment. Here, a design is proposed that shows how a clock tree can be created by abutting fragments inside the SiLago blocks. The clock tree created by abutment was validated by the EDA tools, and its cost metrics were compared to a functionally equivalent clock tree created by conventional EDA flows. The second area enhances the micro-architectural framework. These contributions include SiLago blocks tailored for neural network computation and architectural enhancements to improve the efficiency of executing streaming applications in the SiLago framework. Furthermore, a novel genome recognition application based on a self-organising map (SOM) was also mapped to the SiLago framework. The third area of contribution is implementing a model of cortex as a tiled ASIC design using custom 3D DRAM vaults for synaptic storage. This is preparatory work to identify the SiLago blocks needed to support the implementation of spiking neuromorphic structures and, in general, applications of ordinary differential equations.
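To make the synchoros idea concrete, here is a toy sketch (not from the thesis; the block names and pin layout are invented for illustration) of how a design could compose purely by abutment on a discretised grid, with no global routing step:

    # Toy illustration of synchoros composition: space is discretised into a
    # grid, and a design emerges by abutting fixed-size blocks whose edge
    # "pins" must line up. All names and pin positions are invented.
    from dataclasses import dataclass

    GRID = 10  # side length of the discretised floorplan, in block units

    @dataclass
    class Block:
        name: str
        east_pins: tuple   # wire positions exposed on the east edge
        west_pins: tuple   # wire positions expected on the west edge

    def abuts(left: Block, right: Block) -> bool:
        """Two blocks compose by abutment iff their shared edge pins align."""
        return left.east_pins == right.west_pins

    # A one-row "design" composed purely by abutment:
    row = [Block("DPU", (1, 3), (1, 3)),
           Block("RegFile", (1, 3), (1, 3)),
           Block("DPU", (1, 3), (1, 3))]
    assert all(abuts(a, b) for a, b in zip(row, row[1:]))
    print("row of", len(row), "blocks composes by abutment on a",
          GRID, "x", GRID, "grid")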
2.
  • Chrysanthidis, Nikolaos, et al. (authors)
  • Traces of Semantization, from Episodic to Semantic Memory in a Spiking Cortical Network Model
  • 2022
  • In: eNeuro. - : Society for Neuroscience. - 2373-2822. ; 9:4
  • Journal article (peer-reviewed), abstract:
    • Episodic memory is a recollection of past personal experiences associated with particular times and places. This kind of memory is commonly subject to loss of contextual information or “semantization,” which gradually decouples the encoded memory items from their associated contexts while transforming them into semantic or gist-like representations. Novel extensions to the classical Remember/Know (R/K) behavioral paradigm attribute the loss of episodicity to multiple exposures of an item in different contexts. Despite recent advancements explaining semantization at a behavioral level, the underlying neural mechanisms remain poorly understood. In this study, we suggest and evaluate a novel hypothesis proposing that Bayesian–Hebbian synaptic plasticity mechanisms might cause semantization of episodic memory. We implement a cortical spiking neural network model with a Bayesian–Hebbian learning rule called Bayesian Confidence Propagation Neural Network (BCPNN), which captures the semantization phenomenon and offers a mechanistic explanation for it. Encoding items across multiple contexts leads to item-context decoupling akin to semantization. We compare BCPNN plasticity with the more commonly used spike-timing-dependent plasticity (STDP) learning rule in the same episodic memory task. Unlike BCPNN, STDP does not explain the decontextualization process. We further examine how selective plasticity modulation of isolated salient events may enhance preferential retention and resistance to semantization. Our model reproduces important features of episodicity on behavioral timescales under various biological constraints while also offering a novel neural and synaptic explanation for semantization, thereby casting new light on the interplay between episodic and semantic memory processes. 
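For readers unfamiliar with BCPNN, the classic weight estimate has the form w_ij = log(p_ij / (p_i p_j)), computed from exponentially filtered activity statistics. A minimal sketch of that idea (the trace time constant, epsilon floor and toy activity are assumptions, not the paper's parameters):

    # Minimal sketch of a Bayesian-Hebbian (BCPNN-style) weight estimate from
    # co-activation statistics: w_ij = log(p_ij / (p_i * p_j)), where the p's
    # are exponentially filtered firing probabilities.
    import numpy as np

    rng = np.random.default_rng(0)
    N, T, dt, tau = 20, 500, 1.0, 50.0
    eps = 1e-4                     # keeps the log well-defined at low rates
    s = (rng.random((T, N)) < 0.1).astype(float)   # toy binary activity

    p_i = np.full(N, eps)          # unit probability traces
    p_ij = np.full((N, N), eps)    # co-activation probability traces
    for t in range(T):
        p_i += dt / tau * (s[t] - p_i)
        p_ij += dt / tau * (np.outer(s[t], s[t]) - p_ij)

    w = np.log(np.maximum(p_ij, eps) /
               np.outer(np.maximum(p_i, eps), np.maximum(p_i, eps)))
    b = np.log(np.maximum(p_i, eps))   # intrinsic excitability / bias term
    print("weight matrix:", w.shape, "bias:", b.shape)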
3.
  • Lansner, Anders, Professor, 1949-, et al. (authors)
  • Fast Hebbian plasticity and working memory
  • 2023
  • In: Current Opinion in Neurobiology. - : Elsevier Ltd. - 0959-4388 .- 1873-6882. ; 83
  • Journal article (peer-reviewed), abstract:
    • Theories and models of working memory (WM) have, at least since the mid-1990s, been dominated by the persistent activity hypothesis. The past decade has seen rising concerns about the shortcomings of sustained activity as the mechanism for short-term maintenance of WM information, in the light of accumulating experimental evidence for so-called activity-silent WM and the fundamental difficulty of explaining robust multi-item WM. In consequence, alternative theories are now explored, mostly in the direction of fast synaptic plasticity as the underlying mechanism. The question of non-Hebbian vs Hebbian synaptic plasticity emerges naturally in this context. In this review, we focus on fast Hebbian plasticity and trace the origins of WM theories and models building on this form of associative learning.
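A minimal illustration of the fast-Hebbian, activity-silent idea reviewed here (all constants are invented and this is not any specific model from the review): an item is one-shot encoded into a decaying synaptic trace and later reactivated by a partial cue, with no persistent spiking during the delay:

    # Sketch of fast-Hebbian working memory: the item is held as a rapidly
    # learned, decaying synaptic trace rather than as sustained activity.
    import numpy as np

    rng = np.random.default_rng(1)
    N = 64
    item = (rng.random(N) < 0.2).astype(float)   # sparse pattern to remember

    eta, tau_w = 0.5, 200.0                      # fast learning, slow decay
    W = eta * np.outer(item, item)               # one-shot Hebbian encoding
    np.fill_diagonal(W, 0.0)

    for t in range(100):                         # activity-silent delay
        W *= np.exp(-1.0 / tau_w)                # trace decays, no spiking

    cue = item.copy()
    cue[rng.random(N) < 0.5] *= 0                # degraded retrieval cue
    recalled = (W @ cue > 0.25 * (W @ item).max()).astype(float)
    overlap = recalled @ item / item.sum()
    print(f"overlap of recalled pattern with stored item: {overlap:.2f}")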
4.
5.
  • Martinez Mayorquin, Ramon Heberto (author)
  • Sequence learning in the Bayesian Confidence Propagation Neural Network
  • 2022
  • Doctoral thesis (other academic/artistic), abstract:
    • This thesis examines sequence learning in the Bayesian Confidence Propagation Neural Network (BCPNN). The methodology utilized throughout this work is computational and analytical in nature, and the contributions presented here can be understood along the following four major themes: 1) This work starts by revisiting the properties of the BCPNN as an attractor neural network and then provides a novel formalization of some of those properties: first, a Bayesian theoretical framework for the lower bounds in the BCPNN; second, a differential formulation of the BCPNN plasticity rule that highlights its relationship to similar rules in the learning literature; third, closed-form analytical results for the BCPNN training process. 2) After that, this work describes how the addition of an adaptation process to the BCPNN enables its sequence recall capabilities. The specific mechanisms of sequence learning are then studied in detail, as well as the properties of sequence recall, such as the persistence time (how long the network remains in a specific state during sequence recall) and its robustness to noise. 3) This work also shows how the BCPNN can be enhanced with memory traces of the activity (z-traces) to provide the network with disambiguation capabilities. 4) Finally, this work provides a computational study to quantify the number of sequences that the BCPNN can store successfully. Alongside these central themes, results concerning robustness, stability and the relationship between the learned patterns and the input statistics are presented in either computational or analytical form. The thesis concludes with a discussion of the sequence learning capabilities of the BCPNN in the context of the wider literature and describes both its advantages and disadvantages with respect to other attractor neural networks.
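A toy sketch of the two dynamical ingredients the thesis builds on, assuming simple first-order dynamics (the constants are illustrative, not the thesis's): an adaptation variable that pushes activity down so the next sequence element can take over, and a z-trace that low-pass filters activity to carry recent context forward for disambiguation:

    # One unit driven by a square pulse: adaptation makes its activity sag,
    # and the z-trace outlives the pulse, preserving recent context.
    import numpy as np

    T, dt = 300, 1.0
    tau_a, g_a = 50.0, 1.0     # adaptation time constant and strength
    tau_z = 20.0               # z-trace time constant

    s = np.zeros(T)            # unit activity
    a = np.zeros(T)            # adaptation variable
    z = np.zeros(T)            # z-trace of the activity
    drive = np.zeros(T); drive[50:200] = 1.0

    for t in range(1, T):
        s[t] = max(drive[t] - g_a * a[t-1], 0.0)      # adaptation suppresses
        a[t] = a[t-1] + dt / tau_a * (s[t] - a[t-1])  # adaptation tracks s
        z[t] = z[t-1] + dt / tau_z * (s[t] - z[t-1])  # filtered activity

    print(f"activity at pulse onset {s[51]:.2f}, late in pulse {s[199]:.2f}, "
          f"z-trace 50 steps after pulse {z[249]:.2f}")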
6.
  • Pereira, Patricia, et al. (authors)
  • Incremental Attractor Neural Network Modelling of the Lifespan Retrieval Curve
  • 2022
  • In: 2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN). - : Institute of Electrical and Electronics Engineers (IEEE).
  • Conference paper (peer-reviewed), abstract:
    • The human lifespan retrieval curve describes the proportion of recalled memories from each year of life. It exhibits a reminiscence bump - a tendency for aged people to better recall memories formed during their young adulthood than from other periods of life. We have modelled this using an attractor Bayesian Confidence Propagation Neural Network (BCPNN) with incremental learning. We systematically studied the synaptic mechanisms underlying the reminiscence bump in this network model after introduction of an exponential decay of the synaptic learning rate and examined its sensitivity to network size and other relevant modelling mechanisms. The most influential parameters turned out to be the synaptic learning rate at birth and the time constant of its exponential decay with age, which set the bump position in the lifespan retrieval curve. The other parameters mainly influenced the general magnitude of this curve. Furthermore, we introduced the parametrization of the recency phenomenon - the tendency to better remember the most recent memories - reflected in the curve's upwards tail in the later years of the lifespan. Such recency was achieved by adding a constant baseline component to the exponentially decaying synaptic learning rate.
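The learning-rate schedule described above is simple enough to state directly: an exponential decay from birth plus a constant baseline for the recency tail. A sketch with guessed parameter values (the paper fits these to the lifespan retrieval curve; the numbers here are placeholders):

    # Synaptic learning rate over the lifespan: exponential decay from the
    # rate at birth, plus a constant baseline producing the recency tail.
    import numpy as np

    alpha0, tau, baseline = 0.1, 12.0, 0.01   # rate at birth, decay (years), floor
    age = np.arange(0, 81)                    # age in years

    alpha = alpha0 * np.exp(-age / tau) + baseline

    # Stronger encoding in youth -> reminiscence bump; the baseline keeps
    # recent years recallable (recency).
    for a in (0, 20, 40, 80):
        print(f"age {a:2d}: learning rate {alpha[a]:.4f}")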
7.
  • Podobas, Artur, et al. (authors)
  • StreamBrain : An HPC Framework for Brain-like Neural Networks on CPUs, GPUs and FPGAs
  • 2021
  • In: ACM International Conference Proceeding Series. - New York, NY, USA : Association for Computing Machinery (ACM).
  • Conference paper (peer-reviewed), abstract:
    • The modern deep learning method based on backpropagation has surged in popularity and has been used in multiple domains and application areas. At the same time, there are other - less-known - machine learning algorithms with a mature and solid theoretical foundation whose performance remains unexplored. One such example is the brain-like Bayesian Confidence Propagation Neural Network (BCPNN). In this paper, we introduce StreamBrain - a framework that allows neural networks based on BCPNN to be practically deployed in High-Performance Computing systems. StreamBrain is a domain-specific language (DSL), similar in concept to existing machine learning (ML) frameworks, and supports backends for CPUs, GPUs, and even FPGAs. We empirically demonstrate that StreamBrain can train the well-known ML benchmark dataset MNIST within seconds, and we are the first to demonstrate BCPNN on STL-10 size networks. We also show how StreamBrain can be used to train with custom floating-point formats and illustrate the impact of using different bfloat variations on BCPNN using FPGAs.
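StreamBrain's own API is not reproduced here; the following generic sketch only illustrates the kind of bfloat-style format reduction the paper experiments with, by truncating float32 values to an 8-bit exponent and 7-bit mantissa:

    # bfloat16-style truncation: keep float32's sign and 8 exponent bits,
    # zero out the low 16 mantissa bits.
    import numpy as np

    def to_bfloat16_like(x: np.ndarray) -> np.ndarray:
        """Truncate float32 values to bfloat16 precision (no rounding)."""
        bits = x.astype(np.float32).view(np.uint32)
        return (bits & np.uint32(0xFFFF0000)).view(np.float32)

    w = np.array([0.1234567, -3.1415927, 1e-3], dtype=np.float32)
    wq = to_bfloat16_like(w)
    print("full precision:", w)
    print("bfloat16-like :", wq, "max abs error:", np.abs(w - wq).max())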
8.
  • Ravichandran, Naresh Balaji, et al. (authors)
  • Brain-like Combination of Feedforward and Recurrent Network Components Achieves Prototype Extraction and Robust Pattern Recognition
  • 2023
  • In: Lecture Notes in Computer Science. - Cham : Springer Nature. ; pp. 488-501
  • Conference paper (peer-reviewed), abstract:
    • Associative memory has been a prominent candidate for the computation performed by the massively recurrent neocortical networks. Attractor networks implementing associative memory have offered mechanistic explanation for many cognitive phenomena. However, attractor memory models are typically trained using orthogonal or random patterns to avoid interference between memories, which makes them unfeasible for naturally occurring complex correlated stimuli like images. We approach this problem by combining a recurrent attractor network with a feedforward network that learns distributed representations using an unsupervised Hebbian-Bayesian learning rule. The resulting network model incorporates many known biological properties: unsupervised learning, Hebbian plasticity, sparse distributed activations, sparse connectivity, columnar and laminar cortical architecture, etc. We evaluate the synergistic effects of the feedforward and recurrent network components in complex pattern recognition tasks on the MNIST handwritten digits dataset. We demonstrate that the recurrent attractor component implements associative memory when trained on the feedforward-driven internal (hidden) representations. The associative memory is also shown to perform prototype extraction from the training data and make the representations robust to severely distorted input. We argue that several aspects of the proposed integration of feedforward and recurrent computations are particularly attractive from a machine learning perspective.
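A structural sketch of the described architecture, with a random-projection encoder and plain Hebbian outer-product storage standing in for the paper's unsupervised Hebbian-Bayesian learning (sizes, sparsity and the clean-up schedule are assumptions):

    # Feedforward stage maps input to a sparse hidden code; a recurrent
    # attractor stage trained on those codes cleans up distorted inputs.
    import numpy as np

    rng = np.random.default_rng(2)
    n_in, n_hid, k = 100, 64, 8          # input size, hidden size, active units

    F = rng.normal(size=(n_hid, n_in))   # toy feedforward weights

    def encode(x):
        h = F @ x
        code = np.zeros(n_hid)
        code[np.argsort(h)[-k:]] = 1.0   # k-winners-take-all sparse code
        return code

    # Store attractors for a few training inputs with a Hebbian rule
    X = rng.normal(size=(5, n_in))
    codes = np.array([encode(x) for x in X])
    W = sum(np.outer(c - k / n_hid, c - k / n_hid) for c in codes)
    np.fill_diagonal(W, 0.0)

    # Recall: distort an input, encode it, then let the attractor settle
    noisy = X[0] + 0.8 * rng.normal(size=n_in)
    h = encode(noisy)
    for _ in range(10):                  # recurrent clean-up iterations
        act = W @ h
        h = np.zeros(n_hid); h[np.argsort(act)[-k:]] = 1.0

    print("overlap with stored code:", int(h @ codes[0]), "of", k)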
9.
  • Ravichandran, Naresh Balaji, et al. (authors)
  • Semi-supervised learning with Bayesian Confidence Propagation Neural Network
  • 2021
  • In: ESANN 2021 Proceedings - 29th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning. - : i6doc.com publication. ; pp. 441-446
  • Conference paper (peer-reviewed), abstract:
    • Learning internal representations from data using no or few labels is useful for machine learning research, as it allows using massive amounts of unlabeled data. In this work, we use the Bayesian Confidence Propagation Neural Network (BCPNN) model developed as a biologically plausible model of the cortex. Recent work has demonstrated that these networks can learn useful internal representations from data using local Bayesian-Hebbian learning rules. In this work, we show how such representations can be leveraged in a semi-supervised setting by introducing and comparing different classifiers. We also evaluate and compare such networks with other popular semi-supervised classifiers. 
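The semi-supervised recipe can be sketched generically: learn representations from all the data without labels, then fit a simple classifier on the few labeled examples. Here a random ReLU projection stands in for the BCPNN representation learner; every name and size is an illustrative assumption:

    # Semi-supervised pattern: unsupervised features for everything,
    # supervised classifier for the labeled subset only.
    import numpy as np

    rng = np.random.default_rng(3)
    n, d, h, n_labeled = 1000, 50, 200, 20

    X = rng.normal(size=(n, d))
    y = (X[:, 0] + 0.1 * rng.normal(size=n) > 0).astype(int)  # toy labels

    R = rng.normal(size=(d, h)) / np.sqrt(d)
    Z = np.maximum(X @ R, 0.0)        # "unsupervised" features for all points

    # Logistic regression on the labeled subset (plain gradient descent)
    w = np.zeros(h)
    for _ in range(500):
        p = 1.0 / (1.0 + np.exp(-Z[:n_labeled] @ w))
        w -= 0.1 * Z[:n_labeled].T @ (p - y[:n_labeled]) / n_labeled

    acc = ((Z @ w > 0).astype(int) == y).mean()
    print(f"accuracy on all {n} points with {n_labeled} labels: {acc:.2f}")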
10.
  • Stathis, Dimitrios, et al. (authors)
  • Approximate computation of post-synaptic spikes reduces bandwidth to synaptic storage in a model of cortex
  • 2021
  • In: PROCEEDINGS OF THE 2021 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION (DATE 2021). - : Institute of Electrical and Electronics Engineers (IEEE). ; pp. 685-688
  • Conference paper (peer-reviewed), abstract:
    • The Bayesian Confidence Propagation Neural Network (BCPNN) is a spiking model of the cortex. Its synaptic weights are organized as matrices, which require substantial synaptic storage and high bandwidth to it. The algorithm requires a dual access pattern to these matrices, both row-wise and column-wise. In this work, we exploit an algorithmic optimization that eliminates the column-wise accesses. The new computation model approximates the post-synaptic spike computation with the use of a predictor. We have adopted this approximate computation model to improve upon the previously reported ASIC implementation, eBrainII. We also present an error analysis of the approximation to show that the error is negligible. The reduction in storage and bandwidth to the synaptic storage results in a 48% reduction in energy compared to eBrainII. The reported approximation method also applies to other neural network models based on a Hebbian learning rule.
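A schematic of the access-pattern idea (the data layout and simple threshold stand in for the paper's predictor; this is not the eBrainII implementation): fold each streamed row into per-post-unit accumulators during the row-wise pass, so the separate column-wise pass over the matrix disappears:

    # Row-wise-only traffic to a row-major synaptic matrix: accumulate each
    # post-synaptic unit's input while streaming rows for the pre-side update.
    import numpy as np

    rng = np.random.default_rng(4)
    N = 8
    W = rng.normal(size=(N, N))      # synaptic matrix, stored row-major
    pre = (rng.random(N) < 0.5)      # pre-synaptic spikes this time step

    # Exact column-wise computation (what the optimization removes):
    exact = W[pre, :].sum(axis=0)

    # Row-wise-only computation: one sequential pass over the rows.
    acc = np.zeros(N)
    for i in range(N):
        if pre[i]:
            acc += W[i, :]

    post = acc > 1.0   # simple threshold standing in for the paper's predictor
    print("row-wise pass reproduces column-wise result:", np.allclose(acc, exact))
    print("predicted post-synaptic spikes:", post.astype(int))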