SwePub
MLComp: A Methodology for Machine Learning-based Performance Estimation and Adaptive Selection of Pareto-Optimal Compiler Optimization Sequences

Colucci, A. (author)
Juhasz, D. (author)
Mosbeck, M. (author)
Marchisio, A. (author)
Rehman, S. (author)
Kreutzer, M. (author)
Nadbath, G. (author)
Jantsch, A. (author)
Shafique, M. (author)
Institute of Electrical and Electronics Engineers Inc., 2021
English.
Part of: Proceedings - Design, Automation and Test in Europe (DATE). Institute of Electrical and Electronics Engineers Inc. ISBN 9783981926354, pp. 108-113
  • Conference paper (peer-reviewed)
Abstract

Embedded systems have proliferated in various consumer and industrial applications with the evolution of Cyber-Physical Systems and the Internet of Things. These systems are subject to stringent constraints, so embedded software must be optimized for multiple objectives simultaneously, namely reduced energy consumption, execution time, and code size. Compilers offer optimization phases to improve these metrics. However, properly selecting and ordering them depends on multiple factors and typically requires expert knowledge. State-of-the-art optimizers handle different platforms and applications case by case; they are limited to optimizing one metric at a time and require a time-consuming adaptation to different targets through dynamic profiling. To address these problems, we propose the novel MLComp methodology, in which optimization phases are sequenced by a Reinforcement Learning-based policy. Training of the policy is supported by Machine Learning-based analytical models for quick performance estimation, thereby drastically reducing the time spent on dynamic profiling. In our framework, different Machine Learning models are automatically tested to choose the best-fitting one. The trained Performance Estimator model is leveraged to efficiently devise Reinforcement Learning-based multi-objective policies for creating quasi-optimal phase sequences. Compared to state-of-the-art estimation models, our Performance Estimator model achieves a lower relative error (< 2%) with up to 50× faster training time across multiple platforms and application domains. Our Phase Selection Policy improves the execution time and energy consumption of a given code by up to 12% and 6%, respectively. The Performance Estimator and the Phase Selection Policy can be trained efficiently for any target platform and application domain. © 2021 EDAA.
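
To make the idea in the abstract concrete, below is a minimal, self-contained Python sketch of the core loop: a learned performance estimator stands in for slow dynamic profiling inside a reinforcement-learning search over compiler-pass sequences. This is illustrative only, not the authors' implementation: the pass names, the toy synergy-based cost function (which replaces the trained ML estimator), and the tabular Q-learning hyperparameters are all assumptions made for the sketch.

import random
from collections import defaultdict

PASSES = ["inline", "loop-unroll", "gvn", "licm", "sroa", "dce"]  # assumed pass set
SEQ_LEN = 4  # assumed fixed sequence length

# Toy synergy table standing in for real program behaviour: some pass
# pairs help each other (negative cost), some hurt. Purely illustrative.
SYNERGY = {
    ("inline", "sroa"): -0.3,
    ("loop-unroll", "licm"): -0.2,
    ("loop-unroll", "dce"): 0.15,
    ("gvn", "gvn"): 0.25,
}

def estimated_cost(seq):
    # Stand-in for the trained Performance Estimator: predicts a scalarized
    # (time + energy) cost for a pass sequence without running the program.
    # MLComp trains ML regression models on program features for this step.
    cost = 1.0
    for a, b in zip(seq, seq[1:]):
        cost += SYNERGY.get((a, b), 0.0)
    return cost

def train_policy(episodes=5000, eps=0.2, alpha=0.1):
    # Tabular Q-learning over partial sequences. Q[(state, action)] estimates
    # the cost-to-go, so the greedy policy picks the action with minimal Q.
    Q = defaultdict(float)
    for _ in range(episodes):
        seq = []
        while len(seq) < SEQ_LEN:
            state = tuple(seq)
            action = (random.choice(PASSES) if random.random() < eps
                      else min(PASSES, key=lambda p: Q[(state, p)]))
            seq.append(action)
            if len(seq) == SEQ_LEN:
                # Terminal step: the only "reward" is the estimator's score,
                # so no program execution is needed during training.
                target = estimated_cost(seq)
            else:
                next_state = tuple(seq)
                target = min(Q[(next_state, p)] for p in PASSES)
            Q[(state, action)] += alpha * (target - Q[(state, action)])
    return Q

def best_sequence(Q):
    # Roll out the greedy policy to obtain the final phase sequence.
    seq = []
    while len(seq) < SEQ_LEN:
        seq.append(min(PASSES, key=lambda p: Q[(tuple(seq), p)]))
    return seq

if __name__ == "__main__":
    random.seed(0)
    Q = train_policy()
    seq = best_sequence(Q)
    print("chosen passes:", seq)
    print("predicted cost:", round(estimated_cost(seq), 3))

In a real deployment the estimator would be a regression model trained on static code features (the paper reports automatically testing several ML models and picking the best-fitting one), and the single scalarized cost here would be replaced by a multi-objective reward over execution time, energy, and code size, matching the paper's Pareto-optimal framing.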

Keywords

Advanced Analytics
Codes (symbols)
Embedded systems
Energy policy
Energy utilization
Firmware
Pareto principle
Program compilers
Reinforcement learning
Adaptive selection
Compiler optimizations
Machine learning models
Multiple platforms
Multiple-objectives
Performance estimation
Performance estimator
Stringent constraints
Learning systems

Publication and content type

ref (peer-reviewed)
kon (conference paper)
