SwePub
Search the SwePub database

  Extended search

Result list for the search "WFRF:(Sander Ingo)"

Search: WFRF:(Sander Ingo)

  • Result 51-100 of 125
51.
  • Jordao, Rodolfo, et al. (author)
  • Design space exploration for safe and optimal mapping of avionics functionality on IMA platforms
  • 2023
  • In: AIAA/IEEE Digital Avionics Systems Conference. - : Institute of Electrical and Electronics Engineers (IEEE).
  • Conference paper (peer-reviewed), abstract:
    • Future avionic systems will be increasingly automated. The size and complexity of the avionics functions in these systems will increase likewise. The degree of attainable automation directly depends on the avionics system's computing power and the efficiency of available tools that map the overall functionality onto the target heterogeneous platform architecture. In safety-critical scenarios, these automation tools must also provide safety guarantees that aid or drive the certification processes. In line with this automation goal, we propose a novel design space exploration technique for mapping functionality onto IMA platforms. The design space exploration technique returns mappings of the functionality onto the platform that are safe and increasingly resource-efficient. A safe mapping is one where the functional and extra-functional requirements are met. A resource-efficient mapping is one where fewer processing elements are used to achieve a safe mapping. More importantly, the proposed technique can return computational proof that no safe mapping is likely possible. This proof is key for safety-critical contexts. To demonstrate the suitability of our technique for avionics systems design scenarios, we investigate its use with an industrial avionics case based on the ones from the PANORAMA ITEA3 project. The case study includes two avionics functionalities: one control functionality and one streaming-like functionality. The platform is hierarchical and heterogeneous, with elements oriented for higher safety and elements oriented for higher performance. The avionics case-study evaluation shows that our novel design space exploration technique's abstractions and assumptions adequately represent avionics design scenarios directly or through a systematic overestimation. The technique is openly available within the design space exploration tool IDeSyDe. Therefore, designers can immediately benefit from the optimality and safety guarantees given by our novel design space exploration technique in their avionics design process.
  •  
52.
  • Jordao, Rodolfo, et al. (author)
  • Formulation of Design Space Exploration Problems by Composable Design Space Identification
  • 2021
  • In: PROCEEDINGS OF THE 2021 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION (DATE 2021). - : Institute of Electrical and Electronics Engineers (IEEE). ; , s. 1204-1207
  • Conference paper (peer-reviewed), abstract:
    • Design space exploration (DSE) is a key activity in embedded system design methodologies and can be supported by well-defined models of computation (MoCs) and predictable platform architectures. The original design model, covering the application models, platform models and design constraints, needs to be converted into a form analyzable by computer-aided decision procedures such as mathematical programming or genetic algorithms. This conversion is the process of design space identification (DSI), which becomes very challenging if the design domain comprises several MoCs and platforms. For a systematic solution to this problem, separation of concerns between the design domain and the decision domain is of key importance. We propose in this paper a systematic DSI scheme that is (a) composable, as it enables the stepwise and simultaneous extension of both the design and decision domains, and (b) tuneable, because it also enables different DSE solving techniques given the same design model. We exemplify this DSI scheme by an illustrative example that demonstrates the mechanisms for composition and tuning. Additionally, we show how different compositions can lead to the same decision model as an important property of this DSI scheme.
  •  
53.
  • Kehoe, Laura, et al. (author)
  • Make EU trade with Brazil sustainable
  • 2019
  • In: Science. - : American Association for the Advancement of Science (AAAS). - 0036-8075 .- 1095-9203. ; 364:6438, s. 341-
  • Journal article (other academic/artistic)
  •  
54.
  • Khalilzad, Nima, 1986-, et al. (author)
  • A modular design space exploration framework for multiprocessor real-time systems
  • 2016
  • In: Forum on Specification and Design Languages. - : IEEE. - 9791092279177
  • Conference paper (peer-reviewed), abstract:
    • Embedded system designers often face a large number of design alternatives when designing complex systems. A designer must select an alternative which satisfies application constraints (e.g. timing requirements) while optimizing system-level objectives such as overall energy consumption. The size of the design space is often very large, giving rise to the need for systematic Design Space Exploration (DSE) methods. In this paper we address the DSE problem for real-time applications that belong to two different domains: (i) streaming applications modeled using synchronous dataflow graphs; (ii) feedback control tasks modeled using the periodic task model. We consider a heterogeneous multiprocessor platform in which processors communicate through a predictable bus architecture. We present our DSE tool in which the DSE problem is modeled as a constraint satisfaction problem and solved using a constraint programming solver. This approach provides a modular framework in which different constraints such as deadline, throughput and energy consumption can easily be plugged in depending on the system being designed.
  •  
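Entry 54 above describes modeling the DSE problem as a constraint satisfaction problem into which constraints such as deadline, throughput and energy consumption can be plugged. The toy Python sketch below only illustrates that modelling style; the task set, cost numbers and the brute-force enumeration are invented placeholders, not the authors' framework, which uses a constraint programming solver.

    # Illustrative sketch only: a toy "mapping as constraint satisfaction" model.
    # All tasks, platforms and numbers are hypothetical; a real DSE tool would hand
    # the same constraints to a CP solver instead of enumerating assignments.
    from itertools import product

    TASKS = {"sdf_actor": 4, "control_task": 2, "logger": 1}   # assumed WCET per task (ms)
    PROCS = {"big": 1.0, "little": 2.0}                        # WCET scaling factor per core type
    ENERGY = {"big": 3.0, "little": 1.0}                       # assumed energy cost per ms of execution
    DEADLINE = 10.0                                            # per-core load bound (ms)
    ENERGY_BUDGET = 25.0                                       # total energy bound

    def feasible(mapping):
        """Check the pluggable constraints: per-core utilization and total energy."""
        load = {p: 0.0 for p in PROCS}
        energy = 0.0
        for task, proc in mapping.items():
            wcet = TASKS[task] * PROCS[proc]
            load[proc] += wcet
            energy += wcet * ENERGY[proc]
        return all(l <= DEADLINE for l in load.values()) and energy <= ENERGY_BUDGET

    solutions = [
        dict(zip(TASKS, assignment))
        for assignment in product(PROCS, repeat=len(TASKS))
        if feasible(dict(zip(TASKS, assignment)))
    ]
    print(f"{len(solutions)} feasible mappings, e.g. {solutions[0] if solutions else None}")

A real framework would also add an objective, for example minimizing the number of processing elements used, rather than merely listing feasible mappings.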
55.
  • Lenz, Alina, et al. (author)
  • SAFEPOWER project : Architecture for Safe and Power-Efficient Mixed-Criticality Systems
  • 2016
  • In: 19TH EUROMICRO CONFERENCE ON DIGITAL SYSTEM DESIGN (DSD 2016). - : IEEE. - 9781509028160 ; , s. 294-300
  • Conference paper (peer-reviewed), abstract:
    • With the ever increasing industrial demand for bigger, faster and more efficient systems, a growing number of cores is integrated on a single chip. Additionally, their performance is further maximized by simultaneously executing as many processes as possible, regardless of their criticality. Even safety-critical domains like railway and avionics apply these paradigms under strict certification regulations. As the number of cores is continuously expanding, the importance of cost-effectiveness grows. One way to increase the cost-efficiency of such a System on Chip (SoC) is to enhance the way the SoC handles its power resources. By increasing the power efficiency, the reliability of the SoC is raised, because the lifetime of the battery lengthens. Secondly, by having less energy consumed, the emitted heat is reduced in the SoC, which translates into fewer cooling devices. Though energy efficiency has been thoroughly researched, there is no application of those power saving methods in safety-critical domains yet. The EU project SAFEPOWER(1) targets this research gap and aims to introduce certifiable methods to improve the power efficiency of mixed-criticality real-time systems (MCRTES). This paper will introduce the requirements that a power-efficient SoC has to meet and the challenges such a SoC has to overcome.
  •  
56.
  • Li, Shuo, et al. (author)
  • System level synthesis of hardware for DSP applications using pre-characterized function implementations
  • 2013
  • In: 2013 International Conference on Hardware/Software Codesign and System Synthesis (CODES+ISSS). - : IEEE. - 9781479914173
  • Conference paper (peer-reviewed), abstract:
    • SYLVA is a system-level synthesis framework that transforms DSP sub-systems modeled as synchronous data flow into hardware implementations in ASICs, FPGAs or CGRAs. SYLVA synthesizes in terms of pre-characterized function implementations (FIMPs). It explores the design space in three dimensions: number of FIMPs, type of FIMPs, and pipeline parallelism between the producing and consuming FIMPs. We introduce a timing and interface model of FIMPs to enable reuse and automatic generation of Global Interconnect and Control (GLIC) to glue the FIMPs together into a working system. SYLVA has been evaluated by applying it to five realistic DSP applications, and the results are analyzed for design space exploration, for efficacy in generating GLIC by comparing to manually generated GLIC, and for accuracy of design space exploration by comparing the area and energy costs considered during the exploration, based on pre-characterized FIMPs, with the final results.
  •  
57.
  • Loubach, Denis S., et al. (author)
  • Classification and Mapping of Model Elements for Designing Runtime Reconfigurable Systems
  • 2021
  • In: IEEE Access. - : Institute of Electrical and Electronics Engineers (IEEE). - 2169-3536. ; 9, s. 156337-156360
  • Journal article (peer-reviewed), abstract:
    • Embedded systems are ubiquitous and control many critical functions in society. A fairly new type of embedded system has emerged with the advent of partial reconfiguration, i.e. runtime reconfigurable systems. They are attracting interest in many different applications. Such a system is capable of reconfiguring itself at the hardware level and without the need to halt the application's execution. While modeling and implementing these systems is far from a trivial task, there is currently a lack of systematic approaches to tackle this issue. In other words, there is no unanimously agreed upon modeling paradigm that can capture adaptive behaviors at the highest level of abstraction, especially when regarding the design entry, namely, the initial high-level application and platform models. Given this, our paper proposes two domain ontologies for application and virtual platform models used to derive a classification system and to provide a set of rules on how the different model elements are allowed to be composed together. The application behavior is captured through a formal model of computation which dictates the semantics of execution, concurrency, and synchronization. The main contribution of this paper is to combine suitable formal models of computation, a functional modeling language, and two domain ontologies to create a systematic design flow from an abstract executable application model into a virtual implementation model based on a runtime reconfigurable architecture (virtual platform model) using well-defined mapping rules. We demonstrate the applicability, generality, and potential of the proposed model element classification system and mapping rules by applying them to representative and complete examples: an encoder/decoder system and an avionics attitude estimation system. Both cases yield a virtual implementation model from an abstract application model.
  •  
58.
  •  
59.
  • Lu, Zhonghai, et al. (author)
  • Feasibility analysis of messages for on-chip networks using wormhole routing
  • 2005
  • In: PROCEEDINGS OF THE ASIA AND SOUTH PACIFIC DESIGN AUTOMATION CONFERENCE, VOLS 1 AND 2. - New York, New York, USA : IEEE conference proceedings. ; , s. 960-964
  • Conference paper (peer-reviewed), abstract:
    • The feasibility of a message in a network concerns whether its timing property can be satisfied without preventing any messages already in the network from meeting their timing properties. We present a novel feasibility analysis for real-time (RT) and non-real-time (NT) messages in wormhole-routed networks on chip. For RT messages, we formulate a contention tree that captures contentions in the network. For coexisting RT and NT messages, we propose a simple bandwidth partitioning method that allows us to analyze their feasibility independently.
  •  
60.
  •  
61.
  • Lu, Zhonghai, et al. (author)
  • Refining synchronous communication onto network-on-chip best-effort services
  • 2006
  • In: Applications of Specification and Design Languages for SoCs. - DORDRECHT : Springer. - 1402049978 ; , s. 23-38
  • Conference paper (peer-reviewed), abstract:
    • We present a novel approach to refine a system model specified with perfectly synchronous communication onto a network-on-chip (NoC) best-effort communication service. It is a top-down procedure with three steps, namely, channel refinement, process refinement, and communication mapping. In channel refinement, synchronous channels are replaced with stochastic channels abstracting the best-effort service. In process refinement, processes are refined in terms of interfaces and synchronization properties. Particularly, we use synchronizers to maintain local synchronization of processes and thus achieve synchronization consistency, which is a key requirement while mapping a synchronous model onto an asynchronous architecture. Within communication mapping, the refined processes and channels are mapped to an NoC architecture. Adopting the Nostrum NoC platform as target architecture, we use a digital equalizer as a tutorial example to illustrate the feasibility of our concepts.
  •  
62.
  • Lu, Zhonghai, et al. (author)
  • Towards performance-oriented pattern-based refinement of synchronous models onto NoC communication
  • 2006
  • In: DSD 2006: 9th EUROMICRO Conference on Digital System Design: Architectures, Methods and Tools, Proceedings. - 0769526098 ; , s. 37-44
  • Conference paper (peer-reviewed), abstract:
    • We present a performance-oriented refinement approach that refines a perfectly synchronous communication model onto Network-on-Chip (NoC) communication. We first identify four basic forms of NoC process interaction patterns at the process level, namely producer-consumer, peers, client-server and multicast. We propose a three-step top-down refinement method: channel refinement, protocol refinement and channel mapping, and describe it in detail for the producer-consumer pattern. In channel refinement, we deal with interfacing multiple clock domains and use a stochastic process to model channel delay and jitter. In protocol refinement, we show how to refine communication towards application requirements such as reliability and throughput. In channel mapping, we discuss channel convergence and channel merge arising from channel overlapping. All the refinements have been conducted and validated as an integral design phase towards implementation in ForSyDe, a formal system-level design methodology based on a synchronous model of computation.
  •  
63.
  • Lu, Zhonghai, et al. (author)
  • Using synchronizers for refining synchronous communication onto Hardware/Software architectures
  • 2007
  • In: RSP 2007. - : IEEE Computer Society. - 9780769528342 ; , s. 143-149
  • Conference paper (peer-reviewed), abstract:
    • We have presented a formal set of synchronization components called synchronizers for refining synchronous communication onto HW/SW codesign architectures. Such an architecture imposes asynchronous communication between HW-HW, SW-SW and HW-SW components. The synchronizers enable local synchronization and thus satisfy the synchronization requirement of a typical IP core. In this paper we present their implementations in HW, SW and HW/SW as well as their application. To validate our concepts, we conduct a case study on a Nios FPGA that comprises a processor, memory and custom logic. The final HW/SW implementation achieves performance equivalent to a pure HW implementation. Our prototyping experience suggests that the synchronizers can be standardized as library modules and effectively separate the design of computation from that of communication.
  •  
64.
  • Minhass, Wajid Hassan, et al. (author)
  • Design and implementation of a plesiochronous multi-core 4x4 network-on-chip FPGA platform with MPI HAL support
  • 2009
  • In: 6th FPGAworld Conference, Academic Proceedings 2009. - New York, NY, USA : ACM. - 9781605588797 ; , s. 52-57
  • Conference paper (peer-reviewed), abstract:
    • The Multi-Core NoC is a 4 by 4 Mesh NoC targeted for Altera FPGAs. It implements a deflective routing policy and is used to connect sixteen NIOS II processors. Each NIOS II is connected to the NoC via an address-mapped Resource Network Interface. The Multi-Core NoC is implemented on four separate Altera Stratix II FPGA boards, each hosting a Quad-Core NoC, which operates on a local 50 MHz clock. It has an onboard throughput of 650 Mbps (12.5 MFlit/s), and uses 28% of the LUs, 18% of the ALUTs, 22 % of the dedicated registers and 31% of the total memory blocks of a Stratix II FPGA. Asynchronous clock bridges, with a throughput of 50 Mbps (∼1MFlit/s), are used for the inter-board communication. Application programs use an MPI compatible Hardware Abstraction Layer (HAL) to communicate with the Resource Network Interface of the NoC. The RNI sets up message transfer, with a maximum length of 512 bytes, and sends flits with the size of 32 bit data plus 20 bit headers through the network. The MPI is the bottleneck of the system; it takes 46 us (43.4 kPackets/s) to send a minimum-sized packet through the protocol stack to a near neighbour and bounce it back to the original application. The bounce-back time for a far neighbour is 56 us.
  •  
65.
  • Minhass, Wajid Hassan, et al. (author)
  • Implementation of a scalable, globally plesiochronous locally synchronous, off-chip NoC communication protocol
  • 2009
  • In: 2009 NORCHIP. - 9781424443109 ; , s. 1-5
  • Conference paper (peer-reviewed), abstract:
    • Multiprocessor system-on-chip (MPSoC) design is becoming a regular feature of embedded systems. Shared-bus systems hold many advantages, but they do not scale. Network on chip (NoC) offers a promising solution to the scalability problem by enhancing the topology design. However, standard NoCs are only scalable within a chip. To be able to build infinitely scalable structures, a method to extend the NoC grid off-chip is needed. In this paper, we present such a method. As a proof of concept, the protocol is implemented on a 4 by 4 Mesh NoC, with NIOS II CPU cores as nodes, partitioned across four separate Altera FPGA boards, each board hosting a Quad-Core (2x2) NoC operating on a local 50 MHz clock. The inter-chip communication protocol uses asynchronous clock bridges, with a throughput of 50 Mbps (~1 MFlit/s), and is completely scalable. The NoC has an onboard throughput of 650 Mbps (12.5 MFlit/s). Each Quad-Core uses 28% of the LUs, 18% of the ALUTs, 22% of the dedicated registers and 31% of the total memory blocks of the Stratix II FPGAs. Application programs use an MPI-compatible Hardware Abstraction Layer (HAL) to communicate with each other over the NoC.
  •  
66.
  •  
67.
  • Navas, Byron, et al. (author)
  • Camera and LCM IP-Cores for NIOS SOPC System
  • 2009
  • In: 6th FPGAworld Conference, Academic Proceedings 2009. - New York : ACM. - 9781605588797 ; , s. 18-23
  • Conference paper (peer-reviewed), abstract:
    • This paper presents the development of IP-Cores to integrate the Terasic DC2 Camera and LCM (LCD Module) daughter boards into an Altera Nios system, so that the image can be further processed by embedded software or custom hardware instructions. Among the challenges overcome during this work are clock-domain crossing, synchronizing FIFO design, variable and pipelined burst control, multi-master contention for system memory, and image frame buffer switching. In addition, we designed software device drivers and API functions intended for graphics, image processing and video control, which are part of the IP deliverables. In brief, this work describes some concepts and methodologies involved in the creation of IP-Cores for an Altera SOPC; it also presents the results of the designed CAM-IP and LCM-IP Cores working in an application demo, which constitutes a real solution and a reference design.
  •  
68.
  • Navas, Byron, 1969- (author)
  • Cognitive and Self-Adaptive SoCs with Self-Healing Run-Time-Reconfigurable RecoBlocks
  • 2015
  • Doctoral thesis (other academic/artistic), abstract:
    • In contrast to classical Field-Programmable Gate Arrays (FPGAs), partial and run-time reconfigurable (RTR) FPGAs can selectively reconfigure partitions of its hardware almost immediately while it is still powered and operative. In this way, RTR FPGAs combine the flexibility of software with the high efficiency of hardware. However, their potential cannot be fully exploited due to the increased complexity of the design process, and the intricacy to generate partial reconfigurations. FPGAs are often seen as a single auxiliary area to accelerate algorithms for specific problems. However, when several RTR partitions are implemented and combined with a processor system, new opportunities and challenges appear due to the creation of a heterogeneous RTR embedded system-on-chip (SoC). The aim of this thesis is to investigate how the flexibility, reusability, and productivity in the design process of partial and RTR embedded SoCs can be improved to enable research and development of novel applications in areas such as hardware acceleration, dynamic fault-tolerance, self-healing, self-awareness, and self-adaptation. To address this question, this thesis proposes a solution based on modular reconfigurable IP-cores and design-and-reuse principles to reduce the design complexity and maximize the productivity of such FPGA-based SoCs. The research presented in this thesis found inspiration in several related topics and sciences such as reconfigurable computing, dependability and fault-tolerance, complex adaptive systems, bio-inspired hardware, organic and autonomic computing, psychology, and machine learning. The outcome of this thesis demonstrates that the proposed solution addressed the research question and enabled investigation in initially unexpected fields. The particular contributions of this thesis are: (1) the RecoBlock SoC concept and platform with its flexible and reusable array of RTR IP-cores, (2) a simplified method to transform complex algorithms modeled in Matlab into relocatable partial reconfigurations adapted to an improved RecoBlock IP-core architecture, (3) the self-healing RTR fault-tolerant (FT) schemes, especially the Upset-Fault-Observer (UFO) that reuse available RTR IP-cores to self-assemble hardware redundancy during runtime, (4) the concept of Cognitive Reconfigurable Hardware (CRH) that defines a development path to achieve self-adaptation and cognitive development, (5) an adaptive self-aware and fault-tolerant RTR SoC that learns to adapt the RTR FT schemes to performance goals under uncertainty using rule-based decision making, (6) a method based on online and model-free reinforcement learning that uses a Q-algorithm to self-optimize the activation of dynamic FT schemes in performance-aware RecoBlock SoCs. The vision of this thesis proposes a new class of self-adaptive and cognitive hardware systems consisting of arrays of modular RTR IP-cores. Such a system becomes self-aware of its internal performance and learns to self-optimize the decisions that trigger the adequate self-organization of these RTR cores, i.e., to create dynamic hardware redundancy and self-healing, particularly while working in uncertain environments.
  •  
69.
  • Navas, Byron, et al. (author)
  • On providing scalable self-healing adaptive fault-tolerance to RTR SoCs
  • 2014
  • In: Proceedings of ReConFigurable Computing and FPGAs (ReConFig), 2014 International Conference on. - 9781479959440 ; , s. 1-6
  • Conference paper (peer-reviewed), abstract:
    • The dependability of heterogeneous many-core FPGA-based systems is threatened by higher failure rates caused by disruptive scales of integration, increased design complexity, and radiation sensitivity. Triple-modular redundancy (TMR) and run-time reconfiguration (RTR) are traditional fault-tolerant (FT) techniques used to increase dependability. However, hardware redundancy is expensive and most approaches have poor scalability, flexibility, and programmability. Therefore, innovative solutions are needed to reduce the redundancy cost but still preserve acceptable levels of dependability. In this context, this paper presents the implementation of a self-healing adaptive fault-tolerant SoC that reuses RTR IP-cores in order to self-assemble different TMR schemes during run-time. The presented system demonstrates the feasibility of the Upset-Fault-Observer concept, which provides a run-time self-test and recovery strategy that delivers fault-tolerance over functions accelerated in RTR cores, while at the same time reducing the redundancy scalability cost by running periodic reconfigurable TMR scan-cycles. In addition, this paper experimentally evaluates the trade-offs of the implemented reconfigurable TMR schemes by characterizing important fault-tolerance metrics, i.e., recovery time (self-repair and self-replicate), detection latency, self-assembly latency, throughput reduction, and increase of physical resources.
  •  
70.
  • Navas, Byron, et al. (author)
  • Reinforcement Learning Based Self-Optimization of Dynamic Fault-Tolerant Schemes in Performance-Aware RecoBlock SoCs
  • 2015
  • Reports (other academic/artistic), abstract:
    • Partial and run-time reconfiguration (RTR) technology has increased the range of opportunities and applications in the design of systems-on-chip (SoCs) based on Field-Programmable Gate Arrays (FPGAs). Nevertheless, RTR adds another complexity to the design process, particularly when embedded FPGAs have to deal with power and performance constraints in uncertain environments. Embedded systems will need to make autonomous decisions, develop cognitive properties such as self-awareness and finally become self-adaptive to be deployed in the real world. Classic off-line modeling and programming methods are inadequate to cope with unpredictable environments. Reinforcement learning (RL) methods have been successfully explored to solve these complex optimization problems, mainly on workstation computers, yet they are rarely implemented in embedded systems. Disruptive integration technologies reaching atomic scales will increase the probability of fabrication errors and the sensitivity to electromagnetic radiation that can generate single-event upsets (SEUs) in the configuration memory of FPGAs. Dynamic FT schemes are promising RTR hardware redundancy structures that improve dependability, but on the other hand, they increase memory system traffic. This article presents an FPGA-based SoC that is self-aware of its monitored hardware and utilizes an online RL method to self-optimize the decisions that maintain the desired system performance, particularly when triggering hardware acceleration and dynamic FT schemes on RTR IP-cores. Moreover, this article describes the main features of the RecoBlock SoC concept, overviews the RL theory, shows the Q-learning algorithm adapted for the dynamic fault-tolerance optimization problem, and presents its simulation in Matlab. Based on this investigation, the Q-learning algorithm will be implemented and verified in the RecoBlock SoC platform.
  •  
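Entry 70 above mentions a Q-learning algorithm adapted to decide when to activate dynamic fault-tolerance schemes under a performance goal. The sketch below shows only the generic tabular Q-learning update such an approach builds on; the states, actions and reward function are invented placeholders, not the RecoBlock formulation.

    # Generic tabular Q-learning sketch (illustrative only; states, actions and
    # reward below are invented placeholders, not the RecoBlock SoC formulation).
    import random
    from collections import defaultdict

    ACTIONS = ["simplex", "tmr_scan", "full_tmr"]   # hypothetical FT configurations
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1           # learning rate, discount, exploration rate

    Q = defaultdict(float)                          # Q[(state, action)] -> estimated return

    def choose_action(state):
        """Epsilon-greedy selection over the FT configurations."""
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: Q[(state, a)])

    def update(state, action, reward, next_state):
        """Q-learning update: Q <- Q + alpha * (r + gamma * max_a' Q(s', a') - Q)."""
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

    def toy_reward(bus_traffic, action):
        """Invented reward: trade a traffic-dependent performance penalty against fault exposure."""
        perf_penalty = {"simplex": 0.0, "tmr_scan": 0.3, "full_tmr": 1.0}[action]
        exposure = {"simplex": 1.0, "tmr_scan": 0.3, "full_tmr": 0.0}[action]
        return -(perf_penalty * bus_traffic + exposure)

    for step in range(1000):
        traffic = random.choice([0.2, 0.8])                         # observed bus traffic level
        state = "low_traffic" if traffic < 0.5 else "high_traffic"
        action = choose_action(state)
        next_traffic = random.choice([0.2, 0.8])
        next_state = "low_traffic" if next_traffic < 0.5 else "high_traffic"
        update(state, action, toy_reward(traffic, action), next_state)

    # Learned policy: which FT scheme to trigger in each traffic state.
    print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in ("low_traffic", "high_traffic")})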
71.
  • Navas, Byron, et al. (author)
  • The RecoBlock SoC Platform : A Flexible Array of Reusable Run-Time-Reconfigurable IP-Blocks
  • 2013
  • In: Design, Automation & Test in Europe Conference & Exhibition (DATE), 2013. - 9781467350716 ; , s. 833-838
  • Conference paper (peer-reviewed), abstract:
    • Run-time reconfigurable (RTR) FPGAs combine the flexibility of software with the high efficiency of hardware. Still, their potential cannot be fully exploited due to the increased complexity of the design process. Consequently, to enable an efficient design flow, we devise a set of prerequisites to increase the flexibility and reusability of current FPGA-based RTR architectures. We apply these principles to design and implement the RecoBlock SoC platform, whose main characteristics are (1) an RTR plug-and-play IP-core whose functionality is configured at run-time, (2) flexible inter-block communication configured via software, and (3) built-in buffers to support data-driven streams and inter-process communication. We illustrate the potential of our platform by a tutorial case study using an adaptive streaming application to investigate different combinations of reconfigurable arrays and schedules. The experiments underline the benefits of the platform and show its resource utilization.
  •  
72.
  • Navas, Byron, et al. (author)
  • The Upset-Fault-Observer : A Concept for Self-healing Adaptive Fault Tolerance
  • 2014
  • In: Proceedings of the 2014 NASA/ESA Conference on Adaptive Hardware and Systems, AHS 2014. - : IEEE Computer Society. - 9781479953561 ; , s. 89-96
  • Conference paper (peer-reviewed), abstract:
    • Advancing integration reaching atomic scales makes components highly defective and unstable during their lifetime. This demands paradigm shifts in electronic systems design. FPGAs are particularly sensitive to cosmic and other kinds of radiation that produce single-event upsets (SEUs) in configuration and internal memories. Typical fault-tolerance (FT) techniques combine triple-modular-redundancy (TMR) schemes with run-time reconfiguration (RTR). However, even the most successful approaches disregard the low suitability of fine-grain redundancy in nano-scale design, the poor scalability and programmability of application-specific architectures, the small performance-consumption ratio of board-level designs, or the scarce optimization capability of rigid redundancy structures. In that context, we introduce an innovative solution that exploits the flexibility, reusability, and scalability of a modular RTR SoC approach and reuses existing RTR IP-cores in order to assemble different TMR schemes during run-time. Thus, the system can adaptively trigger the adequate self-healing strategy according to execution environment metrics and user-defined goals. Specifically, the paper presents: (a) the upset-fault-observer (UFO), an innovative run-time self-test and recovery strategy that delivers FT on request over several function cores but saves the redundancy scalability cost by running periodic reconfigurable TMR scan-cycles, (b) run-time reconfigurable TMR schemes and self-repair mechanisms, and (c) an adaptive software organization model to manage the proposed FT strategies.
  •  
73.
  • Navas, Byron, et al. (author)
  • Towards cognitive reconfigurable hardware : Self-aware learning in RTR fault-tolerant SoCs
  • 2015
  • In: Reconfigurable Communication-centric Systems-on-Chip (ReCoSoC), 2015. - : Institute of Electrical and Electronics Engineers (IEEE). - 9781467379427
  • Conference paper (peer-reviewed), abstract:
    • Traditional embedded systems are evolving into power-and-performance-domain self-aware intelligent systems in order to overcome complexity and uncertainty. Without human control, they need to keep operative states in applications such as drone-based delivery or robotic space landing. Nowadays, the partial and run-time reconfiguration (RTR) of FPGA-based Systems-on-chip (SoC) can enable dynamic hardware acceleration or self-healing structures, but this conversely increases system-memory traffic. This paper introduces the basis of cognitive reconfigurable hardware and presents the design of an FPGA-based RTR SoC that becomes conscious of its monitored hardware and learns to make decisions that maintain a desired system performance, particularly when triggering hardware acceleration and dynamic fault-tolerant (FT) schemes on RTR cores. Self-awareness is achieved by evaluating monitored metrics in critical AXI-cores, supported by hardware performance counters. We suggest a reinforcement-learning algorithm that helps the system to search out when and which reconfigurable FT-scheme can be triggered. Executing random sequences of an embedded benchmark suite simulates unpredictability and bus traffic. The evaluation shows the effectiveness and implications of our approach.
  •  
74.
  • Navas, Byron, et al. (author)
  • Towards the generic reconfigurable accelerator : Algorithm development, core design, and performance analysis
  • 2013
  • Conference paper (peer-reviewed), abstract:
    • Adoption of reconfigurable computing is limited in part by the lack of simplified, economic, and reusable solutions. The significant speedup and energy saving can increase performance but also design complexity; in particular for heterogeneous SoCs blending several CPUs, GPUs, and FPGA-Accelerator Cores. On the other hand, implementing complex algorithms in hardware requires modeling and verification, not only HDL generation. Most approaches are too specific without looking for reusability. Therefore, we present a solution based on: (1) a design methodology to develop algorithms accelerated in reconfigurable/non-reconfigurable IP-Cores, using common access tools, and contemplating verification from model to embedded software stages; (2) a generic accelerator core design that enables relocation and reuse almost independently of the algorithm, and data-flow driven execution models; and (3) a performance analysis of the acceleration mechanisms included in our system (i.e., accelerator core, burst I/O transfers, and reconfiguration pre-fetch). In consequence, the implemented system accelerates algorithms (e.g., FIR and Kalman filters) with speedups up to 3 orders of magnitude, compared to processor implementations.
  •  
75.
  • Ngo, Kalle (author)
  • Side-Channel Analysis of Post-Quantum Cryptographic Algorithms
  • 2023
  • Doctoral thesis (other academic/artistic), abstract:
    • Public key cryptographic schemes used today rely on the intractability of certain mathematical problems that are known to be efficiently solvable with a large-scale quantum computer. To address the need for long-term security, in 2016 NIST started a project for standardizing post-quantum cryptography (PQC) primitives that rely on problems not known to be targets for a quantum computer, such as lattice problems. However, algorithms that are secure from the point of view of traditional cryptanalysis can be susceptible to side-channel attacks. Therefore, NIST put a major emphasis on evaluating the resistance of candidate algorithms to side-channel attacks. This thesis focuses on investigating the susceptibility of two NIST PQC candidates, Saber and CRYSTALS-Kyber Key Encapsulation Mechanisms (KEMs), to side-channel attacks. We present a collection of nine papers, of which eight focus on side-channel analysis of Saber and CRYSTALS-Kyber, and one demonstrates a passive side-channel attack on a hardware random number generator (RNG) integrated in STM32 MCUs. In the first three papers, we demonstrate attacks on higher-order masked software implementations of Saber and CRYSTALS-Kyber. One of the main contributions is a single-step deep learning message recovery method capable of recovering secrets from a masked implementation directly, without explicitly extracting the random masks. Another main contribution is a new neural network training method called recursive learning, which enables the training of neural networks capable of recovering a message bit with a probability higher than 99% from higher-order masked implementations. In the next two papers, we show that even software implementations of Saber and CRYSTALS-Kyber protected by both first-order masking and shuffling can be compromised. We present two methods for message recovery: Hamming weight-based and Fisher-Yates (FY) index-based. Both approaches are successful in recovering secret keys, with the latter using considerably fewer traces. In addition, we extend the ECC-based secret key recovery method presented in the prior chapter to ECCs with larger code distances. In the last two papers, we consider a different type of side channel: amplitude-modulated electromagnetic (EM) emanations. We show that information leaked from implementations of Saber and CRYSTALS-Kyber through amplitude-modulated EM side channels can be used to recover the session and secret keys. The main contribution is a multi-bit error-injection method that allows us to exploit byte-level leakage. We demonstrate the success of our method on an nRF52832 system-on-chip supporting Bluetooth 5 and a hardware implementation of CRYSTALS-Kyber in a Xilinx Artix-7 FPGA. Finally, we present a passive side-channel attack on a hardware TRNG in a commercial integrated circuit in our last paper. We demonstrate that it is possible to train a neural network capable of recovering the Hamming weight of random numbers generated by the RNG from power traces with a higher than 60% probability. We also present a new method for mitigating device inter-variability based on iterative re-training. Overall, our research highlights the importance of evaluating the resistance of candidate PQC algorithm implementations to side-channel attacks and demonstrates the susceptibility of current implementations to various types of side-channel analysis. Our findings are expected to provide valuable insights into the design of future PQC algorithms that are resistant to side-channel analysis.
  •  
76.
  • Paone, E., et al. (author)
  • Customization of OpenCL applications for efficient task mapping under heterogeneous platform constraints
  • 2015
  • In: Proceedings -Design, Automation and Test in Europe, DATE. - New Jersey : IEEE conference proceedings. - 9783981537048 ; , s. 736-741
  • Conference paper (peer-reviewed), abstract:
    • When targeting an OpenCL application to platforms with multiple heterogeneous accelerators, task tuning and mapping have to cope with device-specific constraints. To address this problem, we present an innovative design flow for the customization and performance optimization of OpenCL applications on heterogeneous parallel platforms. It consists of two phases: 1) a tuning phase that optimizes each application kernel for a given platform and 2) a task-mapping phase that maximizes the overall application throughput by exploiting concurrency in the application task graph. The tuning phase is suitable for customizing parameterized OpenCL kernels considering device-specific constraints. Then, the mapping phase improves task-level parallelism for multi-device execution, accounting for the overhead of memory transfers, including the overheads implied by multiple OpenCL contexts for different device vendors. The benefits of the proposed design flow have been assessed on a stereo-matching application targeting two commercial heterogeneous platforms.
  •  
77.
  •  
78.
  •  
79.
  •  
80.
  • Raudvere, Tarvo, et al. (author)
  • A Synchronization Algorithm for Local Temporal Refinements in Perfectly Synchronous Models with Nested Feedback Loops
  • 2007
  • In: GLSVLSI'07. - NEW YORK : ASSOC COMPUTING MACHINERY. - 9781595936059 ; , s. 353-358
  • Conference paper (peer-reviewed), abstract:
    • Due to the abstract and simple computation and communication mechanism in the synchronous computational model, it is easy to simulate synchronous systems and to apply formal verification methods. In synchronous models, a local temporal refinement that increases the delay in a single computation block may affect the functionality of the entire model. To preserve the system's functionality after temporal refinements, we provide a synchronization algorithm that applies also to models with nested feedback loops. The algorithm adds pure delay elements to the model in order to balance the delay caused by the refinement and to assure concurrent data arrival at computation blocks. It is done so that the refined model stays latency equivalent to the original model. The advantages of our approach are that (a) we remain fully within the synchronous model of computation, (b) we preserve the functionality of the existing computation blocks, and (c) we do not require additional computation resources, wrapper circuits or schedulers.
  •  
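Entry 80 above rests on balancing the delay introduced by a refined block so that the refined model stays latency equivalent to the original. The sketch below illustrates that idea only, using my own toy list-based signal notation rather than the paper's algorithm.

    # Toy illustration of delay balancing for latency equivalence (own notation, not
    # the paper's algorithm): signals are lists of per-cycle values, a delay prepends
    # an initial token, and a refined block that takes one extra cycle is balanced by
    # a pure delay on the parallel path so the adder still sees aligned data.
    def delay(signal, init=0):
        return [init] + signal[:-1]

    def add(a, b):
        return [x + y for x, y in zip(a, b)]

    src = [1, 2, 3, 4, 5]

    # Original model: combinational blocks f and g feed an adder in the same cycle.
    f = [2 * x for x in src]
    g = [x + 10 for x in src]
    original = add(f, g)

    # Refinement: f now needs an extra cycle (e.g. it was pipelined), so its output
    # is delayed by one token. Balancing: insert a pure delay on the g path as well.
    f_refined = delay([2 * x for x in src])
    g_balanced = delay([x + 10 for x in src])
    refined = add(f_refined, g_balanced)

    # Latency equivalence: the refined output is the original output shifted by one
    # cycle (ignoring the initial token), not a different value sequence.
    assert refined[1:] == original[:-1]
    print(original, refined)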
81.
  • Raudvere, Tarvo, et al. (author)
  • Application and verification of local nonsemantic-preserving transformations in system design
  • 2008
  • In: IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems. - : Institute of Electrical and Electronics Engineers (IEEE). - 0278-0070 .- 1937-4151. ; 27:6, s. 1091-1103
  • Journal article (peer-reviewed), abstract:
    • Due to the increasing abstraction gap between the initial system model and a final implementation, the verification of the respective models against each other is a formidable task. This paper addresses the verification problem by proposing a stepwise application of combined refinement and verification activities in the context of the synchronous model of computation. An implementation model is developed from the system model by applying predefined design transformations which are either 1) semantic preserving or 2) nonsemantic preserving. Nonsemantic-preserving transformations introduce lower-level implementation details, which are necessary to yield an efficient implementation. Our approach divides the verification tasks into two activities: 1) the local correctness of a refined block is checked by using formal verification tools and predefined properties, which are developed for each nonsemantic-preserving transformation, and 2) the global influence of the refinement on the entire system is studied through static analysis. We illustrate the design refinement and verification approach with three transformations: 1) a communication refinement mapping a synchronous channel to an asynchronous one including a handshake mechanism; 2) a computation refinement, which introduces resource sharing in a combinational computation block; and 3) a synchronization-demanding refinement, where an algorithm analyzes the influence of a local refinement on the temporal properties of the entire system and restores the system's correct temporal behavior if necessary.
  •  
82.
  • Raudvere, Tarvo, et al. (author)
  • Polynomial abstraction for verification of sequentially implemented combinational circuits
  • 2004
  • In: DESIGN, AUTOMATION AND TEST IN EUROPE CONFERENCE AND EXHIBITION, VOLS 1 AND 2, PROCEEDINGS. - LOS ALAMITOS : IEEE COMPUTER SOC. - 0769520855 ; , s. 690-691
  • Conference paper (peer-reviewed), abstract:
    • Today's integrated circuits with increasing complexity cause the well-known state space explosion problem in verification tools. In order to handle this problem, a much simpler abstract model of the design has to be created for verification. We introduce the polynomial abstraction technique, which efficiently simplifies the verification task of sequential design blocks whose functionality can be expressed as a polynomial. Through our technique, the domains of possible values of data input signals can be reduced. This is done in such a way that the abstract model is still valid for model checking of the design functionality in terms of the system's control and data properties. We incorporate polynomial abstraction into the ForSyDe methodology, for the verification of clock domain design refinements.
  •  
83.
  • Raudvere, Tarvo, et al. (author)
  • Synchronization after design refinements with sensitive delay elements
  • 2007
  • In: Proceedings of the International Conference on HW/SW Codesign and System Synthesis. - New York, NY, USA : ACM. - 9781595938244
  • Conference paper (peer-reviewed), abstract:
    • The synchronous computational model with its simple computation and communication mechanism makes it easy to describe, simulate and formally verify synchronous embedded systems at a high level of abstraction. In synchronous models, a local refinement increasing the delay in a single computation block may affect the functionality of the entire model. We provide a synchronization algorithm that preserves the system's functionality after design refinements, by using additional synchronization delays and making some delays sensitive to their input values. The refined and synchronized model stays latency equivalent to the original model. The advantages of our approach are the following: (a) we remain fully within the synchronous model of computation, (b) we preserve the functionality of the existing computation blocks, and (c) we do not require additional computation resources, specific communication protocols, wrapper circuits around computation blocks or schedulers.
  •  
84.
  • Raudvere, Tarvo, et al. (author)
  • System level verification of digital signal processing applications based on the polynomial abstraction technique
  • 2005
  • In: ICCAD-2005. - : IEEE. - 078039254X ; , s. 285-290
  • Conference paper (peer-reviewed), abstract:
    • Polynomial abstraction has been developed for data abstraction of sequential circuits whose functionality can be expressed as polynomials. The method, based on the fundamental theorem of algebra, abstracts a possibly infinite domain of input values into a much smaller and finite one, whose size is calculated according to the degree of the respective polynomial. The abstract model preserves the system's control and data properties, which can be verified by model checking. Experiments show that our approach not only allows automatic verification, but also gives considerably better results than existing methods.
  •  
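Entries 82 and 84 above build on a consequence of the fundamental theorem of algebra: two polynomials of degree at most n that agree on n + 1 distinct points are identical, so the input domain of a polynomial block can be shrunk to n + 1 values for verification. The following small self-contained illustration of that argument uses invented toy polynomials, not the ForSyDe verification flow itself.

    # Toy check of the idea behind polynomial abstraction: if a block's function is a
    # polynomial of degree at most n, comparing a "specification" and an
    # "implementation" on just n + 1 distinct inputs decides equivalence.
    # The polynomials below are invented examples, not taken from the papers.
    def spec(x):
        return (x + 1) ** 3                      # degree-3 specification

    def impl(x):
        return x**3 + 3 * x**2 + 3 * x + 1       # implementation in expanded form

    def impl_buggy(x):
        return x**3 + 3 * x**2 + 2 * x + 1       # an injected bug

    DEGREE = 3
    SAMPLE_POINTS = range(DEGREE + 1)            # n + 1 points suffice for degree n

    def equivalent(p, q):
        # If two degree-<=n polynomials agreed on n + 1 distinct points without being
        # equal, their nonzero difference would have more than n roots: impossible.
        return all(p(x) == q(x) for x in SAMPLE_POINTS)

    assert equivalent(spec, impl)
    assert not equivalent(spec, impl_buggy)
    print("abstract-domain check passed")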
85.
  • Raudvere, Tarvo, et al. (author)
  • The ForSyDe semantics
  • 2002
  • In: Proceedings of Swedish System-on-Chip Conference.
  • Conference paper (peer-reviewed)
  •  
86.
  •  
87.
  • Rosvall, Kathrin, et al. (author)
  • A constraint-based design space exploration framework for real-time applications on MPSoCs
  • 2014
  • In: Proceedings -Design, Automation and Test in Europe, DATE 2014. - : IEEE Computer Society. - 9783981537024 ; , s. 1-6
  • Conference paper (peer-reviewed), abstract:
    • Design space exploration (DSE) is a critical step in the design process of real-time multiprocessor systems. Combining a formal base in the form of SDF graphs with predictable platforms providing guaranteed QoS, the paper proposes a flexible and extendable DSE framework that can provide performance guarantees for multiple applications implemented on a shared platform. The DSE framework is formulated in a declarative style as an interprocess-communication-aware constraint programming (CP) model. Apart from mapping and scheduling of application graphs, the model supports design constraints on several cost and performance metrics, such as memory consumption and achievable throughput. Using constraints with different compliance levels, the framework introduces support for mixed criticality in the CP model. The potential of the approach is demonstrated by means of experiments using a Sobel filter, a SUSAN filter, a RASTA-PLP application and a JPEG encoder.
  •  
88.
  • Rosvall, Kathrin, et al. (author)
  • Exploring Power and Throughput for Dataflow Applications on Predictable NoC Multiprocessors
  • 2018
  • Conference paper (peer-reviewed), abstract:
    • System-level optimization for multiple mixed-criticality applications on shared networked multiprocessor platforms is extremely challenging. Substantial complexity arises from the interdependence between the multiple subproblems of mapping, scheduling and platform configuration under the consideration of several, potentially orthogonal, performance metrics and constraints. Instead of using heuristic algorithms and problem decomposition, novel unified design space exploration (DSE) approaches based on Constraint Programming (CP) have in recent years shown promising results. The work in this paper takes advantage of the modularity of CP models in order to support heterogeneous multiprocessor Networks-on-Chip (NoCs) with Temporally Disjoint Network (TDN) aware message injection. The DSE supports a range of design criteria, in particular the optimization and satisfaction of power and throughput. In addition, the DSE now provides a valid configuration for the TDNs that guarantees the performance required to fulfil the design goals. The experiments show the capability of the approach to find low-power and high-throughput designs, and validate a resulting design on a physical TDN-based NoC implementation.
  •  
89.
  • Rosvall, Kathrin, et al. (author)
  • Flexible and Tradeoff-Aware Constraint-Based Design Space Exploration for Streaming Applications on Heterogeneous Platforms
  • 2018
  • In: ACM Transactions on Design Automation of Electronic Systems. - : Association for Computing Machinery (ACM). - 1084-4309 .- 1557-7309. ; 23:2
  • Journal article (peer-reviewed), abstract:
    • Due to its complexity, the problem of mapping and scheduling streaming applications on heterogeneous MPSoCs under real-time and performance constraints has traditionally been tackled by incomplete heuristic algorithms. In recent years, approaches based on Constraint Programming (CP) have shown promising results as complete methods for finding optimal mappings, in particular concerning throughput. However, so far none of the available CP approaches consider the tradeoff between throughput and buffer requirements or throughput and power consumption. This article integrates tradeoff awareness into the CP model and introduces a two-step solving approach that utilizes the advantages of heuristics, while still keeping the completeness property of CP. With a number of experiments considering several streaming applications and different platform models, the article illustrates not only the efficiency of the presented model but also its suitability for solving different problems with various combinations of performance constraints.
  •  
90.
  • Rosvall, Kathrin, et al. (author)
  • Throughput propagation in constraint-based design space exploration for mixed-criticality systems
  • 2017
  • In: ACM International Conference Proceeding Series. - New York, NY, USA : Association for Computing Machinery (ACM). - 9781450348409
  • Conference paper (peer-reviewed), abstract:
    • When designing complex mixed-critical systems on multiprocessor platforms, a huge number of design alternatives has to be evaluated. Therefore, there is a need for tools which systematically find and analyze the ample alternatives and identify solutions that satisfy the design constraints. The recently proposed design space exploration (DSE) tool DeSyDe uses constraint programming (CP) to find implementations with performance guarantees for multiple applications with potentially mixed-critical design constraints on a shared platform. A key component of the DeSyDe tool is its throughput analysis component, called a throughput propagator in the context of CP. The throughput propagator guides the exploration by evaluating each design decision and is therefore executed excessively throughout the exploration. This paper presents two throughput propagators based on different analysis methods for DeSyDe. Their performance is evaluated in a range of experiments with six different application graphs, heterogeneous platform models and mixed-critical design constraints. The results suggest that the MCR throughput propagator is more efficient.
  •  
91.
  • Sander, Ingo, et al. (author)
  • Development and application of design transformations in ForSyDe
  • 2003
  • In: IEE Proceedings - Computers and digital Techniques. - : Institution of Engineering and Technology (IET). - 1350-2387 .- 1359-7027. ; 150:5, s. 313-320
  • Journal article (peer-reviewed), abstract:
    • The formal system design (ForSyDe) methodology has been developed for system level design. Starting with a formal specification model, which captures the functionality of the system at a high level of abstraction, it provides formal design transformation methods for a transparent refinement process of the specification model into an implementation model which is optimised for synthesis. The formal treatment of transformational design refinement is the central contribution of this article. Using the formal semantics of ForSyDe processes we introduce the term characteristic function to be able to define and classify transformations as either semantic preserving or design decision. We also illustrate how we can incorporate classical synthesis techniques that have traditionally been used with control/data-flow graphs as ForSyDe transformations. This approach avoids discontinuities as it moves design refinement into the domain of the specification model.
  •  
92.
  •  
93.
  • Sander, Ingo, 1964-, et al. (author)
  • ForSyDe : System design using a functional language and models of computation
  • 2017
  • In: Handbook of Hardware/Software Codesign. - Dordrecht : Springer Netherlands. - 9789401772679 - 9789401772662 ; , s. 99-140
  • Book chapter (other academic/artistic), abstract:
    • The ForSyDe methodology aims to push system design to a higher level of abstraction by combining the functional programming paradigm with the theory of Models of Computation (MoCs). A key concept of ForSyDe is the use of higher-order functions as process constructors to create processes. This leads to well-defined and well-structured ForSyDe models and gives a solid base for formal analysis. The book chapter introduces the basic concepts of the ForSyDe modeling framework and presents libraries for several MoCs and MoC interfaces for the modeling of heterogeneous systems, including support for the modeling of run-time reconfigurable processes. The formal nature of ForSyDe enables transformational design refinement using both semantic-preserving and nonsemantic-preserving design transformations. The chapter also introduces a general synthesis concept based on process constructors, which is exemplified by means of a hardware synthesis tool for synchronous ForSyDe models. Most examples in the chapter are modeled with the Haskell version of ForSyDe. However, to illustrate that ForSyDe is language-independent, the chapter also contains a short overview of SystemC-ForSyDe.
  •  
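Entry 93 above centres on process constructors: higher-order functions that lift ordinary functions into processes of a model of computation. ForSyDe itself is a Haskell (and SystemC) framework; the Python sketch below only mimics the idea with invented names and list-based signals and is not ForSyDe's actual API.

    # Python-flavoured sketch of the process-constructor idea only; the names mimic
    # but are not ForSyDe's API. A signal is modelled as a list of tokens, and each
    # constructor lifts a pure function into a process of the synchronous MoC.
    def map_sy(f):
        """Combinational process constructor: apply f to every token."""
        return lambda signal: [f(x) for x in signal]

    def delay_sy(init):
        """Sequential process constructor: one-cycle delay with an initial token."""
        return lambda signal: [init] + signal[:-1]

    def moore_sy(next_state, output, init):
        """Moore-machine constructor from a next-state and an output function."""
        def process(signal):
            state, out = init, []
            for token in signal:
                out.append(output(state))
                state = next_state(state, token)
            return out
        return process

    # Composing processes into a small process network: a scaled, accumulated signal.
    scale = map_sy(lambda x: 2 * x)
    accumulate = moore_sy(lambda s, x: s + x, lambda s: s, 0)
    reg = delay_sy(0)

    stimulus = [1, 2, 3, 4]
    print(reg(accumulate(scale(stimulus))))   # [0, 0, 2, 6]

The point of the constructor style is that every process is built from a small set of such higher-order functions, which is what gives the models the well-defined structure used for formal analysis and synthesis.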
94.
  •  
95.
  • Sander, Ingo, et al. (author)
  • High-Level Estimation and Trade-Off Analysis for Adaptive Real-Time Systems
  • 2009
  • In: 2009 IEEE INTERNATIONAL SYMPOSIUM ON PARALLEL & DISTRIBUTED PROCESSING. - 9781424437511 ; , s. 2985-2988
  • Conference paper (peer-reviewed), abstract:
    • We propose a novel design estimation method for adaptive streaming applications to be implemented on a partially reconfigurable FPGA. Based on experimental results, we enable accurate design cost estimates at an early design stage. Given the size and computation time of a set of configurations, which can be derived through logic synthesis, our method gives estimates for configuration parameters, such as bitstream sizes, computation and reconfiguration times. To fulfil the system's throughput requirements, the required FIFO buffer sizes are then calculated using a hybrid analysis approach based on integer linear programming and simulation. Finally, we are able to calculate the total design cost as the sum of the costs for the FPGA area, the required configuration memory and the FIFO buffers. We demonstrate our method by analysing non-obvious trade-offs between a static and a dynamic implementation of adaptivity.
  •  
96.
  • Sander, Ingo, et al. (author)
  • Modelling Adaptive Systems in ForSyDe
  • 2008
  • In: Electronic Notes in Theoretical Computer Science. - : Elsevier BV. - 1571-0661. ; 200:2, s. 39-54
  • Journal article (peer-reviewed), abstract:
    • Emerging architectures such as partially reconfigurable FPGAs provide a huge potential for adaptivity in the area of embedded systems. Since many system functions are only executed at particular points in time, they can share an adaptive component with other system functions, which can significantly reduce the design costs. However, adaptivity adds another dimension of complexity to system design, since the system behaviour changes during the course of adaptation. This imposes additional requirements on the design process, in particular system verification. In this paper we illustrate how adaptivity is treated as a first-class citizen inside the ForSyDe design framework. ForSyDe is a transformational system design methodology, where an initial abstract system model is refined by the application of semantic-preserving and non-semantic-preserving design transformations into a detailed model that can be mapped to an implementation. Since ForSyDe is based on the functional paradigm, we can model adaptivity by using functions as signal values, which we use as the base for our concept of adaptive processes. Depending on the level of adaptivity, we categorise four classes of adaptive processes, spanning from parameter-adaptive to interface-adaptive processes. We illustrate our concepts by two typical examples of adaptivity, where we also show the application of design transformations.
  •  
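Entry 96 above models adaptivity by using functions as signal values, so that an adaptive process applies whatever function currently arrives on its configuration input to its data input. Below is a toy sketch of that notion in the same invented Python notation as the earlier process-constructor example, not ForSyDe's Haskell definitions.

    # Toy sketch of an adaptive process in the style described above (own notation,
    # not ForSyDe's definitions): one input signal carries functions, the other
    # carries data, and the process applies the current function to the current
    # data token, so the behaviour changes during execution.
    def adaptive_sy():
        return lambda func_signal, data_signal: [
            f(x) for f, x in zip(func_signal, data_signal)
        ]

    data = [1, 2, 3, 4]
    # The "configuration" signal switches the behaviour mid-stream, e.g. when a
    # partially reconfigurable region is swapped from a doubler to a squarer.
    functions = [lambda x: 2 * x, lambda x: 2 * x, lambda x: x * x, lambda x: x * x]

    process = adaptive_sy()
    print(process(functions, data))   # [2, 4, 9, 16]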
97.
  • Sander, Ingo, 1964- (author)
  • System Modeling and Design Refinement in ForSyDe
  • 2003
  • Doctoral thesis (other academic/artistic), abstract:
    • Advances in microelectronics allow the integration of more and more functionality on a single chip. Emerging system-on-a-chip architectures include a large amount of heterogeneous components and are of increasing complexity. Applications using these architectures require many low-level details in order to yield an efficient implementation. On the other hand, constant time-to-market pressure on electronic systems demands a short design process that allows to model a system at a high abstraction level, not taking low-level implementation details into account. Clearly there is a significant abstraction gap between an ideal model for specification and another one for implementation. This abstraction gap has to be addressed by methodologies for electronic system design. This thesis presents the ForSyDe (Formal System Design) methodology, which has been developed with the objective to move system design to a higher level of abstraction and to bridge the abstraction gap by transformational design refinement. ForSyDe is based on carefully selected formal foundations. The initial specification model uses a synchronous model of computation, which separates communication from computation and has an abstract notion of time. ForSyDe uses the concept of process constructors to implement the synchronous model, to allow for design transformation and the mapping of a refined model onto the target architecture. The specification model is refined into a detailed implementation model by the stepwise application of well-defined design transformation rules. These rules are either semantic preserving or they inflict a design decision modifying the semantics. These design decisions are used to introduce the low-level implementation details that are needed for an efficient implementation. The implementation model is mapped onto the components of the target architecture. At present, ForSyDe models can be mapped onto VHDL or C/C++ in order to allow commercial tools to generate custom hardware or sequential software. The thesis uses a digital equalizer to illustrate the concepts and potential of ForSyDe. Keywords: Electronic System Design, Hardware/Software Co-Design, Electrical Engineering.
  •  
98.
  • Sander, Ingo, et al. (author)
  • System modeling and transformational design refinement in ForSyDe
  • 2004
  • In: IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems. - : Institute of Electrical and Electronics Engineers (IEEE). - 0278-0070 .- 1937-4151. ; 23:1, s. 17-32
  • Journal article (peer-reviewed), abstract:
    • The scope of the Formal System Design (ForSyDe) methodology is high-level modeling and refinement of systems-on-a-chip and embedded systems. Starting with a formal specification model that captures the functionality of the system at a high abstraction level, it provides formal design-transformation methods for a transparent refinement process of the system model into an implementation model that is optimized for synthesis. The main contribution of this paper is the ForSyDe modeling technique and the formal treatment of transformational design refinement. We introduce process constructors that cleanly separate the computation part of a process from the synchronization and communication part. We develop the characteristic function for each process type and use it to define semantic-preserving and design-decision transformations. These transformations are characterized by a name, the format of the original process network, the transformed process network, and a design implication. The implication expresses the relation between the original and transformed process network by means of the characteristic function. The objective of the refinement process is a model that can be implemented cost-efficiently. To this end, process constructors and processes have a hardware and a software interpretation, which shall facilitate accurate performance and cost estimations. In a study of a digital equalizer example, we illustrate the modeling and refinement process and focus in particular on refinement of the clock domain, communication refinement, and resource sharing.
  •  
99.
  •  
100.
  •  
  • Result 51-100 of 125
Type of publication
conference paper (87)
journal article (18)
doctoral thesis (7)
book chapter (5)
reports (4)
other publication (2)
licentiate thesis (2)
Type of content
peer-reviewed (102)
other academic/artistic (22)
pop. science, debate, etc. (1)
Author/Editor
Sander, Ingo (86)
Jantsch, Axel (43)
Öberg, Johnny (17)
Sander, Ingo, Profes ... (17)
Sander, Ingo, 1964- (16)
Attarzadeh-Niaki, Se ... (14)
Ungureanu, George (13)
Zhu, Jun (9)
Lu, Zhonghai (8)
Rosvall, Kathrin (8)
Hemani, Ahmed (7)
Jordao, Rodolfo (7)
Herrera, Fernando (6)
Attarzadeh-Niaki, S. ... (5)
Attarzadeh Niaki, Se ... (4)
Sander, Ingo, Docent (4)
Kumar, Shashi (3)
Moghaddami Khalilzad ... (3)
Mikulcak, Marcus (3)
Palermo, G. (3)
Ellervee, Peeter (3)
Svantesson, Bengt (3)
Villar, Eugenio (3)
Herrholz, Andreas (3)
Jantsch, A. (2)
Schreiner, S. (2)
Fuglesang, Christer, ... (2)
Mubeen, Saad (2)
Persson, M (2)
Behnam, Moris, 1973- (2)
Chen, Rui (2)
Robino, Francesco (2)
Jakobsen, M. K. (2)
Beserra, G. S. (2)
Becker, Matthias, Dr ... (2)
Söderquist, Ingemar (2)
Bruhn, Fredrik (2)
Ekman, Mats (2)
Bonna, Ricardo (2)
Loubach, Denis S. (2)
Herrera, F. (2)
Castañeda Lozano, Ro ... (2)
Hjort Blindell, Gabr ... (2)
Paone, E. (2)
de Medeiros, Jose. E ... (2)
Ekblad, J. (2)
Mohammadat, Tage (2)
Grüttner, K. (2)
Söderquist, I. (2)
Grimm, Christoph (2)
University
Royal Institute of Technology (123)
Mälardalen University (5)
RISE (2)
Lund University (1)
Mid Sweden University (1)
Chalmers University of Technology (1)
Swedish University of Agricultural Sciences (1)
Language
English (125)
Research subject (UKÄ/SCB)
Engineering and Technology (104)
Natural sciences (22)
Social Sciences (1)
