SwePub

Result list for search "L773:1662 5196 ;lar1:(kth)"

Search: L773:1662 5196 > Kungliga Tekniska Högskolan

  • Results 1-10 of 17
1.
  • Brandi, Maya, et al. (authors)
  • Multiscale modeling through MUSIC multi-simulation : Modeling a dendritic spine using MOOSE and NeuroRD
  • 2011
  • In: Front. Neuroinform. Conference Abstract. - : Frontiers Media SA.
  • Conference paper (peer-reviewed), abstract:
    • The nervous system encompasses structure and phenomena at different spatial and temporal scales from molecule to behavior. In addition, different scales are described by different physical and mathematical formalisms. The dynamics of second messenger pathways can be formulated as stochastic reaction-diffusion systems [1] while the electrical dynamics of the neuronal membrane is often described by compartment models and the Hodgkin-Huxley formalism. In neuroscience, there is an increasing need and interest to study multi-scale phenomena where multiple scales and physical formalisms are covered by a single model. While there exist simulators/frameworks, such as GENESIS and MOOSE [3], which span such scales (Kinetikit/HH models), most software applications are specialized for a given domain. Here, we report on initial steps towards a framework for multi-scale modeling which builds on the concept of multi-simulations [2]. We aim to provide a standard API and communication framework allowing parallel simulators targeted at different scales and/or different physics to communicate on-line in a cluster environment. Specifically, we show prototype work on simulating the effect of receptor-induced cascades on membrane excitability. Electrical properties of a compartment model are simulated in MOOSE, while receptor-induced cascades are simulated in NeuroRD [4,7]. In a prototype system, the two simulators are connected using PyMOOSE [5] and JPype [6]. The two models, with their different physical properties (membrane currents in MOOSE, molecular biophysics in NeuroRD), are joined into a single model system. We demonstrate the interaction of the numerical solvers of two simulators (MOOSE, NeuroRD) targeted at different spatiotemporal scales and different physics while solving a multi-scale problem. We analyze some of the problems that may arise in multi-scale multi-simulations and present requirements for a generic API for parallel solvers. This work represents initial steps towards a flexible modular framework for simulation of large-scale multi-scale multi-physics problems in neuroscience. References: 1. Blackwell KT: An efficient stochastic diffusion algorithm for modeling second messengers in dendrites and spines. J Neurosci Meth 2006, 157: 142-153. 2. Djurfeldt M, Hjorth J, Eppler JM, Dudani N, Helias M, Potjans TC, Bhalla US, Diesmann M, Hellgren Kotaleski J, Ekeberg Ö: Run-Time Interoperability Between Neural Network Simulators Based on the MUSIC Framework. Neuroinformatics 2010, 8: 43-60. 3. Dudani N, Ray S, George S, Bhalla US: Multiscale modeling and interoperability in MOOSE. Neuroscience 2009, 10(Suppl 1): 54. 4. Oliveira RF, Terrin A, Di Benedetto G, Cannon RC, Koh W, Kim M, Zaccolo M, Blackwell KT: The Role of Type 4 Phosphodiesterases in Generating Microdomains of cAMP: Large Scale Stochastic Simulations.
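The coupling scheme described in the abstract above, membrane electrics in one solver and reaction kinetics in another, exchanging quantities at run time, can be illustrated with a minimal sketch. The two solver classes, the exchanged variables, and the time steps below are hypothetical placeholders and do not reproduce the actual PyMOOSE/NeuroRD coupling or the MUSIC API:

    # Toy co-simulation loop: an "electrical" solver and a "chemical" solver
    # advance in lockstep and exchange data at a shared communication interval.
    # All classes, equations, and couplings here are illustrative placeholders.

    class ElectricalSolver:
        def __init__(self):
            self.v = -65.0        # membrane potential (mV)
            self.g_mod = 0.0      # scaling factor set by the chemical side

        def step(self, dt):
            # leaky dynamics with a constant drive; the chemical side scales the leak
            self.v += dt * (-(self.v + 65.0) * (1.0 + self.g_mod) + 10.0)

    class ChemicalSolver:
        def __init__(self):
            self.ca = 0.1         # calcium-like concentration (a.u.)

        def step(self, dt, v):
            # voltage-dependent influx above -60 mV plus first-order decay
            influx = max(0.0, v + 60.0) * 0.001
            self.ca += dt * (influx - 0.05 * self.ca)

    dt, t_comm, t_end = 0.025, 0.5, 10.0   # ms: solver step, exchange interval, total
    elec, chem = ElectricalSolver(), ChemicalSolver()

    t = 0.0
    while t < t_end:
        # advance both solvers independently until the next exchange point
        for _ in range(int(t_comm / dt)):
            elec.step(dt)
            chem.step(dt, elec.v)
        # exchange: chemistry modulates a conductance, voltage feeds the chemistry
        elec.g_mod = 0.5 * chem.ca
        t += t_comm
        print(f"t={t:4.1f} ms  V={elec.v:7.2f} mV  Ca={chem.ca:6.3f}")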
2.
  • Djurfeldt, Mikael, et al. (authors)
  • Efficient generation of connectivity in neuronal networks from simulator-independent descriptions
  • 2014
  • In: Frontiers in Neuroinformatics. - : Frontiers Media SA. - 1662-5196. ; 8, p. 43-
  • Journal article (peer-reviewed), abstract:
    • Simulator-independent descriptions of connectivity in neuronal networks promise greater ease of model sharing, improved reproducibility of simulation results, and reduced programming effort for computational neuroscientists. However, until now, enabling the use of such descriptions in a given simulator in a computationally efficient way has entailed considerable work for simulator developers, which must be repeated for each new connectivity-generating library that is developed. We have developed a generic connection generator interface that provides a standard way to connect a connectivity-generating library to a simulator, such that one library can easily be replaced by another, according to the modeler's needs. We have used the connection generator interface to connect C++ and Python implementations of the previously described connection-set algebra to the NEST simulator. We also demonstrate how the simulator-independent modeling framework PyNN can transparently take advantage of this, passing a connection description through to the simulator layer for rapid processing in C++ where a simulator supports the connection generator interface and falling back to slower iteration in Python otherwise. A set of benchmarks demonstrates the good performance of the interface.
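The abstract above describes a generic connection generator interface through which a simulator can consume connectivity produced by an external library and swap one library for another. A minimal sketch of that idea follows; the class and function names are illustrative only and do not reproduce the actual interface, the csa library, or NEST/PyNN internals:

    import random

    # A connection generator: anything that can be iterated to yield
    # (source, target, weight, delay) tuples for a pair of populations.
    class RandomConnectionGenerator:
        def __init__(self, n_pre, n_post, p, weight=1.0, delay=1.5, seed=42):
            self.n_pre, self.n_post, self.p = n_pre, n_post, p
            self.weight, self.delay = weight, delay
            self.rng = random.Random(seed)

        def __iter__(self):
            for src in range(self.n_pre):
                for tgt in range(self.n_post):
                    if self.rng.random() < self.p:
                        yield src, tgt, self.weight, self.delay

    # Simulator side: consume any generator through the same interface,
    # so one connectivity library can be replaced by another.
    def connect_from_generator(generator, connect_fn):
        for src, tgt, w, d in generator:
            connect_fn(src, tgt, w, d)

    connections = []
    connect_from_generator(
        RandomConnectionGenerator(n_pre=100, n_post=100, p=0.1),
        lambda s, t, w, d: connections.append((s, t, w, d)),
    )
    print(len(connections), "connections instantiated")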
3.
  • Djurfeldt, Mikael, 1967-, et al. (authors)
  • Large-scale modeling - a tool for conquering the complexity of the brain
  • 2008
  • In: Frontiers in Neuroinformatics. - : Frontiers Media SA. - 1662-5196. ; 2, p. 1-4
  • Journal article (peer-reviewed), abstract:
    • Is there any hope of achieving a thorough understanding of higher functions such as perception, memory, thought and emotion, or is the stunning complexity of the brain a barrier which will limit such efforts for the foreseeable future? In this perspective we discuss methods to handle complexity and approaches to model building, and point to detailed large-scale models as a new contribution to the toolbox of the computational neuroscientist. We elucidate some aspects which distinguish large-scale models and some of the technological challenges which they entail.
4.
5.
  • Hjorth, Johannes, et al. (authors)
  • GABAergic control of dendritic calcium dynamics in striatal medium spiny neurons
  • 2008
  • In: Frontiers in Neuroinformatics. - : Frontiers Media SA. - 1662-5196.
  • Conference paper (peer-reviewed), abstract:
    • Experiments have demonstrated the ability of action potentials to actively backpropagate in striatal medium spiny (MS) neurons, affecting the calcium levels in the dendrites [1, 2, 3]. Increased calcium levels trigger changes in plasticity [4, 5], which is important for learning and other functions [6]. Studies in the hippocampus have shown that GABAergic input can modulate the backpropagation of action potentials from the soma to the distal dendrites [7]. The MS neurons receive both proximal feedforward GABAergic inhibition from fast spiking interneurons (FS), and distal feedback inhibition from other neighbouring MS neurons. In the present study the effect of GABAergic inputs on the dendritic calcium dynamics is investigated.
6.
  • Hjorth, Johannes, et al. (authors)
  • The influence of stuttering properties for firing activity in pairs of electrically coupled striatal fast-spiking interneurons
  • 2009
  • In: Neuroinformatics 2009. Pilsen, Czech Republic, September 6-8, 2009. - : Frontiers Media SA.
  • Conference paper (other academic/artistic), abstract:
    • The striatum is the main input stage of the basal ganglia system, which is involved in executive functions of the forebrain, such as the planning and selection of motor behavior. Feedforward inhibition of medium-sized spiny projection neurons in the striatum by fast-spiking interneurons is thought to be an important factor controlling striatal output to later stages of the basal ganglia [1]. Striatal fast-spiking interneurons, which constitute approximately 1-2 % of all striatal neurons, show many similarities to cortical fast-spiking cells. In response to somatic current injection, for example, some of these neurons exhibit spike bursts with a variable number of action potentials (so-called stuttering) [2-4]. Interestingly, the membrane potential between such stuttering episodes oscillates in the range of 20-100 Hz [3,5]. The first spike of each stuttering episode invariably occurs at a peak of the underlying subthreshold oscillation. In both cortex and striatum, fast-spiking cells have been shown to be interconnected by gap junctions [6,7]. In vitro measurements as well as theoretical studies indicate that electrical coupling via gap junctions might be able to promote synchronous activity among these neurons [6,8]. Here we use computational modeling to investigate how the presence of subthreshold oscillations and stuttering properties influence the synchronization of activity in pairs of electrically coupled fast-spiking neurons. We use the model of Golomb et al. [3], which we have extended with a dendritic tree in order to be able to simulate distal synaptic input. We show that gap junctions are able to synchronize both subthreshold membrane potential fluctuations and the stuttering periods in response to somatic current injection. In response to synaptic input, however, our model neuron rarely shows subthreshold oscillations, and the stuttering behavior changes to a firing pattern with single spikes or spike doublets. We furthermore investigate the effect of GABAergic (i.e. inhibitory) input to the model of the fast-spiking neuron and predict that inhibitory input is able to induce overlapping stuttering episodes in these cells. We finally discuss our results in the context of the feedforward inhibitory network, which is likely to play an important role in striatal and basal ganglia function.
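The mechanism at the core of the abstract above, electrical coupling through gap junctions, injects into each cell a current proportional to the voltage difference to its partner, I_gap = g_gap * (V_other - V_self). A minimal sketch with two leaky integrate-and-fire units (not the Golomb et al. model used in the study; all parameters are illustrative) shows the coupling term and how it pulls the two subthreshold trajectories toward each other:

    import numpy as np

    # Two leaky integrate-and-fire neurons coupled by a gap junction:
    # I_gap = g_gap * (V_other - V_self). Parameters are illustrative only.
    dt, t_end = 0.1, 200.0                  # ms
    tau, v_rest, v_th, v_reset = 20.0, -65.0, -50.0, -65.0
    g_gap = 0.15
    i_ext = np.array([1.2, 1.2])            # identical drive for both cells
    v = np.array([-65.0, -55.0])            # start out of phase

    spikes = [[], []]
    for step in range(int(t_end / dt)):
        t = step * dt
        i_gap = g_gap * (v[::-1] - v)       # current from the other cell
        v = v + (-(v - v_rest) / tau + i_ext + i_gap) * dt
        for i in range(2):
            if v[i] >= v_th:
                spikes[i].append(t)
                v[i] = v_reset

    print("cell 0 spike times:", [round(s, 1) for s in spikes[0][:5]])
    print("cell 1 spike times:", [round(s, 1) for s in spikes[1][:5]])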
7.
8.
  • Jordan, Jakob, et al. (authors)
  • Extremely Scalable Spiking Neuronal Network Simulation Code : From Laptops to Exascale Computers
  • 2018
  • In: Frontiers in Neuroinformatics. - : Frontiers Media SA. - 1662-5196. ; 12
  • Journal article (peer-reviewed), abstract:
    • State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.
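The directed communication scheme described above can be illustrated as follows: each rank records, per local neuron, which remote ranks host at least one of its targets, and a spike is then sent only to those ranks instead of being broadcast to all compute nodes. This is an illustrative reconstruction of the idea, not the NEST data structures:

    from collections import defaultdict

    # Sparse target table: for each local neuron, the set of remote ranks
    # that host at least one of its targets (illustrative data; normally
    # built once at connection time).
    target_ranks = {
        0: {3, 17},        # neuron 0 projects to neurons on ranks 3 and 17
        1: {3},
        2: set(),          # purely local targets: nothing to send
    }

    def route_spikes(fired_neurons, target_ranks):
        # Build one send buffer per destination rank; ranks that host no
        # targets of any fired neuron receive nothing at all.
        send_buffers = defaultdict(list)
        for neuron in fired_neurons:
            for rank in target_ranks.get(neuron, ()):
                send_buffers[rank].append(neuron)
        return dict(send_buffers)

    print(route_spikes(fired_neurons=[0, 1, 2], target_ranks=target_ranks))
    # {3: [0, 1], 17: [0]} : only two of possibly thousands of ranks are contacted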
9.
  • Klaus, A., et al. (authors)
  • Synchronization effects between striatal fast-spiking interneurons forming networks with different topologies
  • 2008
  • In: Frontiers in Neuroinformatics. - : Frontiers Media SA. - 1662-5196.
  • Journal article (other academic/artistic), abstract:
    • The basal ganglia are involved in executive functions of the forebrain, such as the planning and selection of motor behavior. In the striatum, which is the input stage of the basal ganglia system, fast-spiking interneurons provide an effective feedforward inhibition to the medium-sized spiny projection neurons. Thus, these fast-spiking neurons are able to control the striatal output to later stages in the basal ganglia. Recent modeling studies have shown that pairs of cells as well as randomly connected networks of electrically coupled fast-spiking cells are able to synchronize their activity. Here we want to investigate the influence of network topology and network size on synchronization in a simulated network of striatal fast-spiking interneurons. We use a biophysically detailed single-cell model of the fast-spiking interneuron with 127 compartments (Hellgren Kotaleski et al., J Neurophysiology, 95: 331-41, 2006; Hjorth et al., Neurocomputing 70: 1887–1891, 2007), and parallelize the network model of electrically coupled fast-spiking cells using PGENESIS running on a Blue Gene/L supercomputer. General network statistics and synaptic input are constrained by published data from the striatum. Network topology is varied from 'regular' through 'small-world' to 'random' (Watts & Strogatz, Nature 393: 440–442, 1998). Using common statistical measures, we will determine the extent of local and global synchronization for each network topology. Furthermore, we investigate the interactions in the network by means of Ising models (Schneidman et al., Nature 440: 1007–1012, 2006). We are particularly interested in the relation between the 'interaction', as obtained by the Ising model, and the underlying network topology; e.g., do directly coupled fast-spiking interneuron pairs synchronize most? So far, the small number of fast-spiking cells in the striatum (less than 2 %) makes experimental studies on the network level difficult or even impossible. With our study we hope to gain a better understanding of interaction effects in the feedforward inhibitory network of the striatum.
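The topology sweep from 'regular' through 'small-world' to 'random' follows the Watts and Strogatz rewiring scheme cited in the abstract: starting from a ring lattice, each edge is rewired with probability p, where p = 0 keeps the regular lattice and p = 1 approaches a random graph. A minimal sketch using networkx; node count, degree, and rewiring probabilities are chosen for illustration only:

    import networkx as nx

    n, k = 100, 4   # 100 nodes, each coupled to its 4 nearest neighbours

    for p in (0.0, 0.1, 1.0):   # regular -> small-world -> random
        # connected_watts_strogatz_graph retries until the rewired graph is connected
        g = nx.connected_watts_strogatz_graph(n=n, k=k, p=p, tries=100, seed=1)
        clustering = nx.average_clustering(g)
        path_len = nx.average_shortest_path_length(g)
        print(f"p={p:4.1f}  clustering={clustering:.3f}  mean path length={path_len:.2f}")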
10.
  • Kunkel, Susanne, et al. (authors)
  • The NEST Dry-Run Mode : Efficient Dynamic Analysis of Neuronal Network Simulation Code
  • 2017
  • In: Frontiers in Neuroinformatics. - : Frontiers Media SA. - 1662-5196. ; 11
  • Journal article (peer-reviewed), abstract:
    • NEST is a simulator for spiking neuronal networks that commits to a general-purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it were part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling.
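The dry-run idea, a single process that builds and advances only its own share of an M-process simulation while replacing the communication step with locally generated placeholder traffic, can be sketched as follows. This is a toy reconstruction of the concept, not the actual NEST implementation; the function and its parameters are hypothetical, and memory reporting via the resource module is Unix-specific with platform-dependent units:

    import random
    import resource   # Unix-only; reported units vary by platform
    import time

    def dry_run(rank, num_procs, num_neurons, sim_steps, rate=0.02, seed=0):
        """Emulate one rank of a num_procs-process simulation in a single process."""
        rng = random.Random(seed)
        # Build only the neurons this rank would own in a real parallel run.
        local_neurons = list(range(rank, num_neurons, num_procs))
        received = 0
        t0 = time.perf_counter()
        for _ in range(sim_steps):
            # Placeholder update: decide which local neurons fire in this step.
            fired = [n for n in local_neurons if rng.random() < rate]
            # In a real run this would be the MPI exchange; here the incoming
            # spikes are fabricated locally so data volumes resemble a parallel run.
            incoming = [rng.randrange(num_neurons) for _ in range(len(fired))]
            received += len(incoming)
        runtime = time.perf_counter() - t0
        peak_rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
        return runtime, received, peak_rss

    runtime, received, peak_rss = dry_run(rank=0, num_procs=1024,
                                          num_neurons=1_000_000, sim_steps=100)
    print(f"runtime {runtime:.3f} s, {received} fabricated spikes, peak RSS {peak_rss}")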