SwePub

Results list for search "WFRF:(Jenihhin M.)"

Search: WFRF:(Jenihhin M.)

  • Results 1-6 of 6
1.
  • Taheri, M., et al. (authors)
  • DeepAxe : A Framework for Exploration of Approximation and Reliability Trade-offs in DNN Accelerators
  • 2023
  • In: Proceedings - International Symposium on Quality Electronic Design, ISQED. - IEEE Computer Society. - ISBN 9798350334753
  • Conference paper (peer-reviewed), abstract:
    • While the role of Deep Neural Networks (DNNs) in a wide range of safety-critical applications is expanding, emerging DNNs experience massive growth in their demand for computation power. This raises the necessity of improving the reliability of DNN accelerators while reducing the computational burden on the hardware platforms, i.e. reducing the energy consumption and execution time as well as increasing the efficiency of DNN accelerators. Therefore, the trade-off between hardware performance, i.e. area, power and delay, and the reliability of the DNN accelerator implementation becomes critical and requires tools for analysis. In this paper, we propose DeepAxe, a framework for design space exploration of FPGA-based implementations of DNNs that considers the trilateral impact of applying functional approximation on accuracy, reliability and hardware performance. The framework enables selective approximation of reliability-critical DNNs, providing a set of Pareto-optimal DNN implementation design space points for the target resource utilization requirements. The design flow starts with a pre-trained network in Keras, uses the innovative high-level synthesis environment DeepHLS and results in a set of Pareto-optimal design space points as a guide for the designer. The framework is demonstrated on a case study of custom and state-of-the-art DNNs and datasets. (An illustrative sketch of the Pareto-selection step follows this entry.)
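
The DeepAxe abstract above centers on selecting Pareto-optimal design points that trade off accuracy, reliability and hardware cost. The sketch below is not DeepAxe itself; assuming each candidate implementation has already been scored, it only illustrates the Pareto-selection step, with invented metric names (accuracy, reliability, lut_cost) and values.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class DesignPoint:
        name: str           # hypothetical configuration label
        accuracy: float     # higher is better
        reliability: float  # higher is better (e.g., fraction of masked faults)
        lut_cost: int       # lower is better (e.g., FPGA LUT usage)

    def dominates(a: DesignPoint, b: DesignPoint) -> bool:
        """True if a is at least as good as b on every metric and strictly better on one."""
        no_worse = (a.accuracy >= b.accuracy and a.reliability >= b.reliability
                    and a.lut_cost <= b.lut_cost)
        better = (a.accuracy > b.accuracy or a.reliability > b.reliability
                  or a.lut_cost < b.lut_cost)
        return no_worse and better

    def pareto_front(points: List[DesignPoint]) -> List[DesignPoint]:
        """Keep only the points that no other point dominates."""
        return [p for p in points if not any(dominates(q, p) for q in points if q is not p)]

    if __name__ == "__main__":
        candidates = [
            DesignPoint("exact",         0.91, 0.80, 12000),
            DesignPoint("approx_mul_8b", 0.90, 0.78, 9000),
            DesignPoint("approx_mul_4b", 0.84, 0.77, 7000),
            DesignPoint("approx_unbal",  0.83, 0.70, 9500),   # dominated by the 4-bit variant
        ]
        for p in pareto_front(candidates):
            print(p)

In a real flow the candidate scores would come from synthesis reports and fault simulation rather than hard-coded numbers.
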
2.
  • Ahmadilivani, M. H., et al. (authors)
  • A Systematic Literature Review on Hardware Reliability Assessment Methods for Deep Neural Networks
  • 2024
  • In: ACM Computing Surveys. - Association for Computing Machinery. - ISSN 0360-0300, E-ISSN 1557-7341. ; 56:6
  • Journal article (peer-reviewed), abstract:
    • Artificial Intelligence (AI) and, in particular, Machine Learning (ML) have emerged to be utilized in various applications due to their capability to learn how to solve complex problems. Over the past decade, rapid advances in ML have presented Deep Neural Networks (DNNs) consisting of a large number of neurons and layers. DNN Hardware Accelerators (DHAs) are leveraged to deploy DNNs in the target applications. Safety-critical applications, where hardware faults/errors would result in catastrophic consequences, also benefit from DHAs. Therefore, the reliability of DNNs is an essential subject of research. In recent years, several studies have accordingly been published to assess the reliability of DNNs. In this regard, various reliability assessment methods have been proposed on a variety of platforms and applications. Hence, there is a need to summarize the state-of-the-art to identify the gaps in the study of the reliability of DNNs. In this work, we conduct a Systematic Literature Review (SLR) on the reliability assessment methods of DNNs to collect as many relevant research works as possible, present a categorization of them, and address the open challenges. Through this SLR, three kinds of methods for reliability assessment of DNNs are identified, including Fault Injection (FI), Analytical, and Hybrid methods. Since the majority of works assess DNN reliability by FI, we characterize the different approaches and platforms of the FI method comprehensively. Moreover, Analytical and Hybrid methods are presented as well. Thus, the different reliability assessment methods for DNNs are elaborated with respect to the DNN platforms on which they are conducted and the reliability evaluation metrics they use. Finally, we highlight the advantages and disadvantages of the identified methods and address the open challenges in the research area. We have concluded that Analytical and Hybrid methods are light-weight yet sufficiently accurate and have the potential to be extended in future research and to be utilized in establishing novel DNN reliability assessment frameworks.
3.
  • Ahmadilivani, M. H., et al. (authors)
  • Enhancing Fault Resilience of QNNs by Selective Neuron Splitting
  • 2023
  • In: AICAS 2023 - IEEE International Conference on Artificial Intelligence Circuits and Systems, Proceeding. - Institute of Electrical and Electronics Engineers Inc. - ISBN 9798350332674
  • Conference paper (peer-reviewed), abstract:
    • The superior performance of Deep Neural Networks (DNNs) has led to their application in various aspects of human life. Safety-critical applications are no exception and impose rigorous reliability requirements on DNNs. Quantized Neural Networks (QNNs) have emerged to tackle the complexity of DNN accelerators; however, they are more prone to reliability issues. In this paper, a recent analytical resilience assessment method is adapted for QNNs to identify critical neurons based on a Neuron Vulnerability Factor (NVF). Thereafter, a novel method for splitting the critical neurons is proposed that enables the design of a Lightweight Correction Unit (LCU) in the accelerator without redesigning its computational part. The method is validated by experiments on different QNNs and datasets. The results demonstrate that the proposed method for correcting the faults incurs half the overhead of selective Triple Modular Redundancy (TMR) while achieving a similar level of fault resiliency. (A sketch of the neuron-splitting idea follows this entry.)
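
The abstract above identifies critical neurons via a Neuron Vulnerability Factor (NVF) and then "splits" them. The sketch below is only one possible reading of that idea, not the paper's method and not its Lightweight Correction Unit: the highest-NVF hidden neurons of a small fully connected layer pair are duplicated and their outgoing weights halved, so the fault-free output is unchanged while each copy carries only half of the neuron's contribution. The NVF values are random placeholders rather than analytically derived scores.

    import numpy as np

    def split_critical_neurons(W1, W2, nvf, k):
        """Duplicate the k most vulnerable hidden neurons and halve their
        outgoing weights, so each copy carries half of the original contribution.

        W1: (hidden, inputs) incoming weights; W2: (outputs, hidden) outgoing
        weights; nvf: per-hidden-neuron vulnerability scores (higher = more critical)."""
        critical = np.argsort(nvf)[-k:]                       # indices of the k highest-NVF neurons
        W1_new = np.vstack([W1, W1[critical, :]])             # copy incoming weights of critical neurons
        W2_half = W2.copy()
        W2_half[:, critical] *= 0.5                           # original copy now contributes half
        W2_new = np.hstack([W2_half, W2_half[:, critical]])   # the duplicate contributes the other half
        return W1_new, W2_new

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        W1 = rng.standard_normal((8, 4))
        W2 = rng.standard_normal((3, 8))
        nvf = rng.random(8)                                   # placeholder vulnerability scores
        x = rng.standard_normal(4)

        W1s, W2s = split_critical_neurons(W1, W2, nvf, k=2)
        before = W2 @ np.maximum(W1 @ x, 0)                   # ReLU MLP output, original
        after = W2s @ np.maximum(W1s @ x, 0)                  # ReLU MLP output, after splitting
        print(np.allclose(before, after))                     # True: splitting is functionally neutral

A hardware correction unit that compares the two copies, as in the paper, is outside the scope of this sketch.
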
4.
  • Taheri, M., et al. (authors)
  • AdAM : Adaptive Fault-Tolerant Approximate Multiplier for Edge DNN Accelerators
  • 2024
  • In: Proceedings of the European Test Workshop. - Institute of Electrical and Electronics Engineers (IEEE). - ISBN 9798350349320
  • Conference paper (peer-reviewed), abstract:
    • Multiplication is the most resource-hungry operation in a neural network's processing elements. In this paper, we propose AdAM, a novel adaptive fault-tolerant approximate multiplier architecture tailored for ASIC-based DNN accelerators. AdAM employs an adaptive adder relying on an unconventional use of the leading-one position value of the inputs for fault detection through the optimization of unutilized adder resources. The proposed architecture uses a lightweight fault mitigation technique that sets the detected faulty bits to zero. The hardware resource utilization and the DNN accelerator's reliability metrics are used to compare the proposed solution against triple modular redundancy (TMR) in multiplication, unprotected exact multiplication, and unprotected approximate multiplication. It is demonstrated that the proposed architecture enables multiplication with a reliability level close to that of TMR-protected multipliers while utilizing 63.54% less area and having a 39.06% lower power-delay product than the exact multiplier. (A sketch of leading-one-based approximation follows this entry.)
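
AdAM's adaptive adder and its fault-detection path are hardware structures and are not reproduced here. As background only, the sketch below shows the general leading-one idea that truncation-based approximate multipliers (e.g., DRUM-style designs) build on: keep the leading one and a few bits below it in each operand, multiply the short segments, and shift the result back. The parameter t and the test values are arbitrary assumptions, and this is not AdAM's architecture.

    def leading_one_pos(x: int) -> int:
        """Index of the most significant set bit (0 for x in {0, 1})."""
        return max(x.bit_length() - 1, 0)

    def approx_mul(a: int, b: int, t: int = 4) -> int:
        """Truncation-based approximation: keep the leading one plus the t bits
        below it in each operand, multiply the truncated segments, shift back."""
        if a == 0 or b == 0:
            return 0
        sa = max(leading_one_pos(a) - t, 0)   # low bits dropped from a
        sb = max(leading_one_pos(b) - t, 0)   # low bits dropped from b
        return ((a >> sa) * (b >> sb)) << (sa + sb)

    if __name__ == "__main__":
        for a, b in [(200, 117), (1023, 933), (7, 9)]:
            exact, approx = a * b, approx_mul(a, b)
            print(f"{a}*{b}: exact={exact} approx={approx} "
                  f"error={abs(exact - approx) / exact:.2%}")

Dropping only the bits below the leading one keeps the relative error bounded regardless of operand magnitude, which is why leading-one detection is a common building block in approximate multipliers.
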
5.
  • Taheri, M., et al. (authors)
  • SAFFIRA : a Framework for Assessing the Reliability of Systolic-Array-Based DNN Accelerators
  • 2024
  • In: 2024 27th International Symposium on Design & Diagnostics of Electronic Circuits & Systems (DDECS). - Institute of Electrical and Electronics Engineers Inc. - ISBN 9798350359343 ; pp. 19-24
  • Conference paper (peer-reviewed), abstract:
    • The systolic array has emerged as a prominent architecture for Deep Neural Network (DNN) hardware accelerators, providing the high-throughput and low-latency performance essential for deploying DNNs across diverse applications. However, when used in safety-critical applications, reliability assessment is mandatory to guarantee the correct behavior of DNN accelerators. While fault injection stands out as a well-established, practical and robust method for reliability assessment, it is still a very time-consuming process. This paper addresses the time-efficiency issue by introducing a novel hierarchical software-based hardware-aware fault injection strategy tailored for systolic-array-based DNN accelerators. The Uniform Recurrent Equations system is used for software modeling of the systolic-array core of the DNN accelerators. The approach demonstrates a reduction of the fault injection time of up to 3× compared to state-of-the-art hybrid (software/hardware) hardware-aware fault injection frameworks, and of more than 2000× compared to RT-level fault injection frameworks, without compromising accuracy. Additionally, we propose and evaluate a new reliability metric through experimental assessment. The performance of the framework is studied on state-of-the-art DNN benchmarks. (A toy fault-injection sketch follows this entry.)
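
SAFFIRA's hierarchical, Uniform-Recurrent-Equations-based modeling is not reproduced here. The toy sketch below only illustrates what software-based fault injection measures in principle: a bit is flipped in one accumulator of a software dot-product model (standing in for a transient fault in one processing element), and the result is compared with the golden output to see whether the "decision" changes. The sizes, fault model and trial count are arbitrary assumptions.

    import random

    def flip_bit(value: int, bit: int, width: int = 32) -> int:
        """Flip one bit of a two's-complement integer of the given width."""
        u = (value & ((1 << width) - 1)) ^ (1 << bit)
        return u - (1 << width) if u >= (1 << (width - 1)) else u

    def scores_with_fault(x, W, fault=None):
        """Dot products of x with each column of W, accumulated step by step as a
        processing element would; fault = (column, step, bit) flips one accumulator bit."""
        out = []
        for j in range(len(W[0])):
            acc = 0
            for s in range(len(x)):
                acc += x[s] * W[s][j]
                if fault is not None and fault[:2] == (j, s):
                    acc = flip_bit(acc, fault[2])
            out.append(acc)
        return out

    if __name__ == "__main__":
        random.seed(0)
        x = [random.randint(-4, 4) for _ in range(8)]                      # toy input vector
        W = [[random.randint(-4, 4) for _ in range(5)] for _ in range(8)]  # toy weight matrix
        golden = scores_with_fault(x, W)
        top = golden.index(max(golden))

        critical, trials = 0, 1000
        for _ in range(trials):
            f = (random.randrange(5), random.randrange(8), random.randrange(16))
            faulty = scores_with_fault(x, W, fault=f)
            critical += faulty.index(max(faulty)) != top                   # decision changed
        print(f"{critical}/{trials} injections changed the top-scoring output")

Hardware-aware frameworks additionally map such injections onto the accelerator's real dataflow and timing, which is where the reported speed-ups over RT-level simulation come from.
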
6.
  • Jervan, Gert, et al. (authors)
  • Test time minimization for hybrid BIST of core-based systems
  • 2006
  • In: Journal of Computer Science and Technology. - Springer Science and Business Media LLC. - ISSN 1000-9000, E-ISSN 1860-4749. ; 21:6, pp. 907-912
  • Journal article (peer-reviewed), abstract:
    • This paper presents a solution to the test time minimization problem for core-based systems. We assume a hybrid BIST approach, where a test set is assembled, for each core, from pseudorandom test patterns that are generated online and deterministic test patterns that are generated offline and stored in the system. In this paper we propose an iterative algorithm to find the optimal combination of pseudorandom and deterministic test sets for the whole system, consisting of multiple cores, under given memory constraints, so that the total test time is minimized. Our approach employs a fast estimation methodology in order to avoid exhaustive search and to speed up the calculation process. Experimental results have shown the efficiency of the algorithm in finding near-optimal solutions. © Springer Science + Business Media, Inc. 2006. (A greedy sketch of this trade-off follows this entry.)
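
The abstract above optimizes, per core, the split between online pseudorandom patterns (no memory cost, long test time) and stored deterministic patterns (memory cost, short test time) under a global memory constraint. The paper's own algorithm is iterative and estimation-based; the sketch below is only a greedy illustration of the trade-off, with invented per-core cost curves, and it further assumes cores are tested sequentially so that the total test time is the sum of per-core times.

    def greedy_hybrid_bist(cores, memory_budget):
        """Greedily move cores toward options with more stored deterministic
        patterns, always taking the step that saves the most test cycles per
        memory word, until the memory budget is exhausted.

        cores: dict name -> list of (memory_words, test_cycles) options sorted by
        increasing memory; option 0 is pure pseudorandom BIST (zero memory)."""
        choice = {name: 0 for name in cores}
        used = 0
        while True:
            best_gain, best_core = 0.0, None
            for name, opts in cores.items():
                i = choice[name]
                if i + 1 >= len(opts):
                    continue
                d_mem = opts[i + 1][0] - opts[i][0]
                d_time = opts[i][1] - opts[i + 1][1]
                if used + d_mem <= memory_budget and d_time > 0:
                    gain = d_time / max(d_mem, 1)     # cycles saved per memory word
                    if gain > best_gain:
                        best_gain, best_core = gain, name
            if best_core is None:
                break
            i = choice[best_core]
            used += cores[best_core][i + 1][0] - cores[best_core][i][0]
            choice[best_core] = i + 1
        total_time = sum(cores[n][choice[n]][1] for n in cores)
        return choice, used, total_time

    if __name__ == "__main__":
        # Invented cost curves: storing more deterministic patterns costs memory
        # but shortens each core's test.
        cores = {
            "core_a": [(0, 4000), (64, 2500), (128, 1800), (256, 1500)],
            "core_b": [(0, 6000), (32, 4200), (96, 3000)],
            "core_c": [(0, 3500), (48, 2600), (160, 2100)],
        }
        print(greedy_hybrid_bist(cores, memory_budget=256))

A greedy pass like this is not guaranteed to reach the optimum that the paper's estimation-driven iterative search targets; it only shows the shape of the problem.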