SwePub
Search the SwePub database


Hit list for search "LAR1:lu ;mspu:(conferencepaper);pers:(Spaanenburg Lambert);pers:(Slump C H)"


  • Results 1-9 of 9
1.
  • Spaanenburg, Lambert, et al. (author)
  • Embedded sensory systems based on topographic maps
  • 2003
  • In: Proceedings Progress 03. - 9073461375 ; pp. 232-237
  • Conference paper (peer-reviewed), abstract:
    • Image processing is one of the popular applications of Cellular Neural Networks. Macro-enriched field-programmable gate arrays can be used to realize such systems on silicon. The paper discusses a pipelined implementation that supports the handling of gray-level images at 180 to 240 Mpixels per second by exploiting the Virtex-II macros to spatially unroll the local feedback.
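As a sketch of the Cellular Neural Network processing the abstract above refers to: a discrete-time CNN updates each cell's state from a feedback template A over neighboring cell outputs and a control template B over the input image. The edge-extraction templates, bias, and zero initial state below are textbook-style assumptions, not values from the paper (which targets a pipelined Virtex-II implementation).

```python
import numpy as np

def cnn_step(x, u, A, B, z):
    """One discrete-time Cellular Neural Network update.
    x: cell states, u: input image, A: 3x3 feedback template over the
    cell outputs, B: 3x3 control template over the input, z: bias."""
    y = 0.5 * (np.abs(x + 1) - np.abs(x - 1))     # piecewise-linear output
    pad = lambda m: np.pad(m, 1, mode="edge")     # replicate border cells
    H, W = x.shape
    new_x = np.full_like(x, float(z), dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            # each template tap multiplies a shifted neighborhood
            new_x += A[dy + 1, dx + 1] * pad(y)[1 + dy:1 + dy + H, 1 + dx:1 + dx + W]
            new_x += B[dy + 1, dx + 1] * pad(u)[1 + dy:1 + dy + H, 1 + dx:1 + dx + W]
    return new_x

# Assumed textbook-style edge templates: positive self-feedback plus a
# Laplacian-like control template.
A = np.zeros((3, 3)); A[1, 1] = 2.0
B = np.array([[-1., -1., -1.], [-1., 8., -1.], [-1., -1., -1.]])
u = np.full((8, 8), -1.0); u[:, 4:] = 1.0         # white left, black right
x = np.zeros_like(u)                              # start from a zero state
for _ in range(10):                               # iterate toward a fixed point
    x = cnn_step(x, u, A, B, z=-1.0)
y = 0.5 * (np.abs(x + 1) - np.abs(x - 1))         # +1 only on the edge column
```

With these templates every cell saturates after a couple of iterations; a pipelined FPGA realization like the one described effectively unrolls such iterations spatially across multiplier macros.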
2.
  • Spaanenburg, Lambert, et al. (author)
  • Molding the knowledge in modular neural networks
  • 2002
  • In: Learning Solutions (SNN/STW workshop "Lerende Oplossingen"). - STW Technology Foundation ; pp. 25-26
  • Conference paper (peer-reviewed)
3.
  • Spaanenburg, Lambert, et al. (author)
  • Natural learning of neural networks by reconfiguration
  • 2003
  • In: SPIE Proceedings on Bioengineered and Bioinspired Systems. - SPIE. - 1996-756X .- 0277-786X ; 5119, pp. 273-284
  • Conference paper (peer-reviewed), abstract:
    • The communicational and computational demands of neural networks are hard to satisfy in a digital technology. Temporal computing solves this problem by iteration, but leaves a slow network. Spatial computing was no option until the advent of modern FPGA devices. The paper shows how a small feed-forward neural module can be configured on the limited logic blocks between the RAM and multiplier macros. It is then described how, by spatial unrolling or by reconfiguration, a large modular ANN can be built from such modules.
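A minimal sketch of the kind of neural module such FPGA mappings build on: a neuron reduced to a fixed-point multiply-accumulate plus a saturating activation, the combination that fits one multiplier macro and a few logic blocks. The Q8 number format and the weights are illustrative assumptions, not the paper's actual datapath.

```python
import numpy as np

FRAC = 8                                   # assumed Q8 fixed-point format

def to_fix(values):
    """Quantize real values to Q8 integers."""
    return np.round(np.asarray(values, dtype=float) * (1 << FRAC)).astype(np.int64)

def neuron_fix(w_fix, x_fix):
    """Fixed-point multiply-accumulate with a saturating (hard-limited)
    activation: the part of a neuron that maps onto a multiplier macro."""
    acc = int(np.dot(w_fix, x_fix)) >> FRAC           # rescale the product
    return max(-(1 << FRAC), min(1 << FRAC, acc))     # clamp to [-1, +1]

w = to_fix([0.5, -0.25])
x = to_fix([1.0, 1.0])
y = neuron_fix(w, x) / (1 << FRAC)                    # 0.5 - 0.25 = 0.25
```

A larger modular network would either replicate this unit spatially or, as the abstract suggests, reuse it by reconfiguring the device between modules.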
4.
  • Spaanenburg, Lambert, et al. (author)
  • Preparing for knowledge extraction in modular neural networks
  • 2002
  • In: 3rd IEEE Signal Processing Symposium SPS'02 ; pp. 121-124
  • Conference paper (peer-reviewed), abstract:
    • Neural networks learn knowledge from data. For a monolithic structure, this knowledge can easily be used but not isolated. The many degrees of freedom during learning make knowledge extraction a computationally intensive process, as the representation is not unique. Where existing knowledge is inserted to initialize the network for training, its effect subsequently becomes randomized within the solution space. The paper describes structuring techniques such as modularity and hierarchy to create a topology that provides a better view of the learned knowledge and supports later rule extraction.
5.
  • vanderZwaag, B J, et al. (author)
  • Analysis of neural networks for edge detection
  • 2002
  • In: Proceedings of the ProRISC Workshop on Circuits, Systems and Signal Processing. - STW Technology Foundation. - 90-73461-33-2 ; pp. 580-586
  • Conference paper (peer-reviewed), abstract:
    • This paper illustrates a novel method to analyze artificial neural networks so as to gain insight into their internal functionality. To this purpose, the elements of a feedforward-backpropagation neural network that has been trained to detect edges in images are described in terms of differential operators of various orders and with various angles of operation.
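The analysis described in the abstract above can be illustrated in a few lines: take a hidden-unit weight kernel and measure its normalized correlation with first-order differential operators such as the Sobel pair; a high score identifies the unit as (approximately) a directional derivative. The "learned" 3x3 kernel below is synthetic, not taken from the paper.

```python
import numpy as np

# First-order differential operators (Sobel pair), familiar filters
# in the image processing domain.
SOBEL_X = np.array([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
SOBEL_Y = SOBEL_X.T

def correlation(w, op):
    """Normalized cross-correlation between a weight kernel and an operator."""
    w0, o0 = w - w.mean(), op - op.mean()
    return float((w0 * o0).sum() / (np.linalg.norm(w0) * np.linalg.norm(o0)))

# Synthetic 'learned' hidden-unit weights: a noisy horizontal-gradient kernel.
rng = np.random.default_rng(0)
learned = SOBEL_X + 0.1 * rng.standard_normal((3, 3))

cx = correlation(learned, SOBEL_X)   # close to 1: the unit acts like d/dx
cy = correlation(learned, SOBEL_Y)   # near 0: little d/dy component
```

Repeating this per hidden unit, and over rotated operators, yields the "orders and angles of operation" description the abstract mentions.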
6.
  • vanderZwaag, B J, et al. (author)
  • Analysis of neural networks in terms of domain functions
  • 2002
  • In: 3rd IEEE Signal Processing Symposium SPS'02 ; pp. 237-240
  • Conference paper (peer-reviewed), abstract:
    • Despite their success story, artificial neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a “magic tool” but possibly even more as a mysterious “black box.” Although much research has already been done to “open the box,” there is a notable hiatus in known publications on the analysis of neural networks. So far, mainly sensitivity analysis and rule extraction methods have been used to analyze neural networks. However, these can only be applied in a limited subset of the problem domains where neural network solutions are encountered. In this paper we propose a more widely applicable method which, for a given problem domain, involves identifying basic functions with which users in that domain are already familiar, and describing trained neural networks, or parts thereof, in terms of those basic functions. This will provide a comprehensible description of the neural network’s function and, depending on the chosen base functions, it may also provide insight into the neural network’s inner “reasoning.” It could further be used to optimize neural network systems. An analysis in terms of base functions may even make clear how to (re)construct a superior system using those base functions, thus using the neural network as a construction advisor.
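The proposal above, describing a trained network in terms of base functions users already know, can be sketched for a single learned convolution kernel: project it by least squares onto a small dictionary of familiar image operators. Both the dictionary and the "trained" kernel below are synthetic assumptions for illustration.

```python
import numpy as np

# A small dictionary of base functions familiar in image processing.
BASES = {
    "identity":  np.array([[0., 0., 0.], [0., 1., 0.], [0., 0., 0.]]),
    "d/dx":      np.array([[-1., 0., 1.]] * 3),        # horizontal gradient
    "d/dy":      np.array([[-1., 0., 1.]] * 3).T,      # vertical gradient
    "laplacian": np.array([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]]),
}

def describe(kernel):
    """Least-squares coefficients of `kernel` over the base-function dictionary."""
    A = np.stack([b.ravel() for b in BASES.values()], axis=1)   # 9 x 4 system
    coeffs, *_ = np.linalg.lstsq(A, kernel.ravel(), rcond=None)
    return dict(zip(BASES.keys(), coeffs.round(3)))

# Synthetic 'trained' kernel: mostly a Laplacian with a weak d/dx term.
kernel = 1.0 * BASES["laplacian"] + 0.2 * BASES["d/dx"]
coeffs = describe(kernel)   # laplacian ≈ 1.0, d/dx ≈ 0.2, others ≈ 0
```

The resulting coefficients are exactly the kind of comprehensible, domain-familiar description the abstract argues for; a large residual would signal that the unit does something outside the chosen base.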
7.
  • vanderZwaag, B J, et al. (author)
  • Analysis of neural networks through base functions
  • 2002
  • In: SNN/STW workshop "Lerende Oplossingen".
  • Conference paper (peer-reviewed), abstract:
    • Problem statement. Despite their success story, neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a “magic tool” but possibly even more as a mysterious “black box” [1]. This is an important aspect of the functionality of any technology, as users will be interested in “how it works” before trusting it completely. Although much research has already been done to “open the box,” there is a notable hiatus in known publications on the analysis of neural networks. So far, mainly sensitivity analysis and rule extraction methods have been used to analyze neural networks. However, these can only be applied in a limited subset of the problem domains where neural network solutions are encountered. Research goal and approach. We therefore propose a method which, for a given problem domain, involves identifying basic functions with which users in that domain are already familiar, and describing trained neural networks, or parts thereof, in terms of those basic functions. This will provide a comprehensible description of the neural network’s function and, depending on the chosen basic functions, it may also provide insight into the network’s inner “reasoning.” Relevance. Domain-specific analysis of neural networks through base functions will not only provide insight into the internal and external behavior of neural networks and show their possible limitations in particular applications, but it will also lower the acceptance threshold for future users unfamiliar with neural networks. Further, domain-specific neural network analysis methods that utilize domain-specific base functions can also be used to optimize neural network systems.
An analysis in terms of base functions may even make clear how to (re)construct a superior system using those base functions, thus using the neural network merely as a construction advisor. If a user does not want to trust a neural network for any reason whatsoever, he may still trust a non-neural system that would have been nearly impossible to construct without using a neural network as an advisor. Initial results. As an example, the poster shows that an edge detector realized by a neural network can be analyzed in terms of differential filter operators, which are common in the digital image processing domain (for more details, see [2]). The same analysis was applied to some well-known image filters, enabling a comparison of conventional edge detectors known from the literature with the neural network edge detectors. The difference between our comparison and more commonly used methods is that ours was based directly on the detectors’ filter operations rather than on their performance on a given (benchmark) example. The latter is a more indirect method of comparison and does not provide any insight into the neural network’s functionality.
8.
  • vanderZwaag, B J, et al. (author)
  • Process identification through modular neural networks and rule extraction
  • 2002
  • In: Proceedings of the Fourteenth Belgium/Netherlands Conference on Artificial Intelligence (BNAIC'02). - 1568-7805 ; pp. 507-508
  • Conference paper (peer-reviewed), abstract:
    • Monolithic neural networks may be trained from measured data to establish knowledge about the process. Unfortunately, this knowledge is not guaranteed to be found and, if found at all, is hard to extract. Modular neural networks are better suited for this purpose. Domain-ordered by topology, rule extraction is performed module by module. This has all the benefits of a divide-and-conquer method and opens the way to structured design. This paper discusses a next step in this direction by illustrating the potential of base functions to design the neural model.
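The module-by-module rule extraction sketched in the abstract can be reduced, for a single threshold unit, to reading the trained weights back as an IF-rule: the unit fires when its weighted sum exceeds the negated bias. Weights, bias, and the input names below are hypothetical, chosen only to illustrate the idea.

```python
import numpy as np

def extract_rule(weights, bias, names, resolution=1):
    """Read one trained threshold unit back as a human-readable rule.
    The unit fires when sum(w_i * x_i) + bias > 0; rewriting that
    inequality with the input names gives an IF-condition."""
    terms = [f"{round(w, resolution):+g}*{n}" for w, n in zip(weights, names)]
    return f"IF {' '.join(terms)} > {round(-bias, resolution):g} THEN fire"

# Hypothetical weights of one module's output unit (illustrative values).
w = np.array([2.1, -0.5, 1.4])
rule = extract_rule(w, bias=-1.2, names=["temp", "noise", "pressure"])
# rule: IF +2.1*temp -0.5*noise +1.4*pressure > 1.2 THEN fire
```

In a modular network this is applied per module, so each extracted rule stays small; the divide-and-conquer benefit the abstract claims comes from never having to interpret the monolithic weight matrix at once.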
9.
  • vanderZwaag, B J, et al. (author)
  • Process identification through modular neural networks and rule extraction
  • 2002
  • In: Proceedings FLINS 2002 ; pp. 268-277
  • Conference paper (peer-reviewed), abstract:
    • Monolithic neural networks may be trained from measured data to establish knowledge about the process. Unfortunately, this knowledge is not guaranteed to be found and, if found at all, is hard to extract. Modular neural networks are better suited for this purpose. Domain-ordered by topology, rule extraction is performed module by module. This has all the benefits of a divide-and-conquer method and opens the way to structured design. This paper discusses a next step in this direction by illustrating the potential of base functions to design the neural model.
Type of publication
Type of content
peer-reviewed (9)
Author/editor
vanderZwaag, B J (8)
Alberts, R (1)
Lundgren, A. (1)
Malki, Suleyman (1)
Slump, C H (1)
Achterop, S (1)
Venema, R S (1)
University
Language
English (9)
Research subject (UKÄ/SCB)
Engineering (9)
