SwePub
Search the SwePub database



Search: WFRF:(Lisper Björn)

  • Result 1-50 of 123
1.
  • Addazi, Lorenzo, et al. (author)
  • Executable modelling for highly parallel accelerators
  • 2019
  • In: Proceedings - 2019 ACM/IEEE 22nd International Conference on Model Driven Engineering Languages and Systems Companion, MODELS-C 2019. - : Institute of Electrical and Electronics Engineers Inc. - 9781728151250 ; , s. 318-321
  • Conference paper (peer-reviewed). Abstract:
    • High-performance embedded computing is developing rapidly since applications in most domains require a large and increasing amount of computing power. On the hardware side, this requirement is met by the introduction of heterogeneous systems, with highly parallel accelerators that are designed to take care of the computation-heavy parts of an application. There is today a plethora of accelerator architectures, including GPUs, many-cores, FPGAs, and domain-specific architectures such as AI accelerators. They all have their own programming models, which are typically complex, low-level, and involve explicit parallelism. This yields error-prone software that puts functional safety at risk, which is unacceptable for safety-critical embedded applications. In this position paper we argue that high-level executable modelling languages tailored for parallel computing can help in the software design for high-performance embedded applications. In particular, we consider the data-parallel model to be a suitable candidate, since it allows very abstract parallel algorithm specifications free from race conditions. Moreover, we promote the Action Language for fUML (and thereby fUML) as a suitable host language.
  •  
2.
  • Altenbernd, Peter, et al. (author)
  • Automatic Generation of Timing Models for Timing Analysis of High-Level Code
  • 2011
  • In: 19th International Conference on Real-Time and Network Systems (RTNS2011).
  • Conference paper (peer-reviewed). Abstract:
    • Traditional timing analysis is applied only in the late stages of embedded system software development, when the hardware is available and the code is compiled and linked. However, preliminary timing estimates are often needed already in early stages of system development, both for hard and soft real-time systems. If the hardware is not yet fully accessible, or the code is not yet ready to compile or link, then the timing estimation must be done for the source code rather than for the binary. This paper describes how source-level timing models can be derived automatically for given combinations of hardware architecture and compiler. The models are identified from measured execution times for a set of synthetic "training programs" compiled for the hardware platform in question. The models can be used to derive source-level WCET estimates, as well as for estimating the execution times of single program runs. Our experiments indicate that the models can predict the execution times of the final, compiled code with a deviation of up to 20%.
  •  
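The model identification described in entry 2 invites a small worked sketch: fit per-construct cycle costs by least squares from the measured end-to-end times of training programs, then predict a new program's time from its construct counts. All counts and cycle numbers below are invented, and only numpy is assumed; this illustrates the general idea, not the paper's tooling.

    import numpy as np

    # Rows: training programs. Columns: how often each source-level
    # construct (e.g. add, mul, load, branch) executes in one run.
    counts = np.array([
        [120., 10.,  80., 30.],
        [ 40., 55.,  20., 10.],
        [200.,  5., 150., 60.],
        [ 10., 90.,  40., 25.],
        [ 75., 75.,  75., 75.],
    ])
    measured_cycles = np.array([1530., 1015., 2690., 1180., 1875.])

    # Least-squares identification of the per-construct cost vector.
    costs, *_ = np.linalg.lstsq(counts, measured_cycles, rcond=None)

    # Estimate an unseen program's execution time from its counts alone.
    new_program = np.array([90., 20., 60., 15.])
    print("per-construct costs:", np.round(costs, 2))
    print("predicted cycles:", round(float(new_program @ costs), 1))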
3.
  • Altenbernd, Peter, et al. (author)
  • Early execution time-estimation through automatically generated timing models
  • 2016
  • In: Real-time systems. - : Springer Science and Business Media LLC. - 0922-6443 .- 1573-1383. ; 52:6, s. 731-760
  • Journal article (peer-reviewed). Abstract:
    • Traditional timing analysis, such as worst-case execution time analysis, is normally applied only in the late stages of embedded system software development, when the hardware is available and the code is compiled and linked. However, preliminary timing estimates are often needed in early stages of system development as an essential prerequisite for the configuration of the hardware setup and dimensioning of the system. During this phase the hardware is often not available, and the code might not be ready to link. This article describes an approach to predict the execution time of software through an early, source-level timing analysis. A timing model for source code is automatically derived from a given combination of hardware architecture and compiler. The model is identified from measured execution times for a set of synthetic training programs, compiled for the hardware platform in question. It can be used to estimate the execution time for code running on the platform: the estimation is then done directly from the source code, without compiling and running it. Our experiments show that, using this model, we can predict the execution times of the final, compiled code surprisingly well. For instance, we achieve an average deviation of 8 % for a set of benchmark programs for the ARM7 architecture.
  •  
4.
  • Altmeyer, Sebastian, et al. (author)
  • Parametric timing analysis for complex architectures
  • 2008
  • In: Proceedings - 14th IEEE International Conference on Embedded and Real-Time Computing Systems and Applications, RTCSA 2008. - 9780769533490 ; , s. 367-376
  • Conference paper (peer-reviewed). Abstract:
    • Hard real-time systems have stringent timing constraints expressed in units of time. To ensure that a task finishes within its time-frame, the designer of such a system must be able to derive upper bounds on the task's worst-case execution time (WCET). To compute such upper bounds, timing analyses are used. These analyses require that information such as bounds on the maximum numbers of loop iterations are known statically, i.e. during design time. Parametric timing analysis softens these requirements: it yields symbolic formulas instead of single numeric values representing the upper bound on the task's execution time. In this paper, we present a new parametric timing analysis that is able to derive safe and precise results. Our method determines what the parameters of the program are, constructs parametric loop bounds, takes processor behaviour into account and attains a formula automatically. In the end, we present tests to show that the precision and runtime of our analysis are very close to those of numeric timing analysis.
  •  
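To make the notion of a symbolic WCET formula in entry 4 concrete, the fragment below builds such a formula for a nested loop and instantiates it once the parameters are known. Block costs and loop structure are invented; sympy is assumed, and this is only a toy rendering of what a parametric analysis outputs, not the analysis itself.

    import sympy as sp

    n, m = sp.symbols("n m", nonnegative=True)

    # Invented per-block cycle costs for a nested loop:
    # n outer iterations, m inner iterations per outer round.
    c_init, c_outer, c_inner = 12, 8, 5
    wcet = c_init + n * (c_outer + m * c_inner)

    print(sp.expand(wcet))             # 5*m*n + 8*n + 12
    print(wcet.subs({n: 100, m: 16}))  # 8812, once n and m are known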
5.
  • Altmeyer, S., et al. (author)
  • WCET and mixed-criticality : What does confidence in WCET estimations depend upon?
  • 2015
  • In: OpenAccess Series in Informatics. - 9783939897958 ; , s. 65-74
  • Conference paper (peer-reviewed). Abstract:
    • Mixed-criticality systems integrate components of different criticality. Different criticality levels require different levels of confidence in the correct behavior of a component. One aspect of correctness is timing. Confidence in worst-case execution time (WCET) estimates depends on the process by which they have been obtained. A somewhat naive view is that static WCET analysis determines safe bounds in which we can have absolute confidence, while measurement-based approaches are inherently unreliable. In this paper, we refine this view by exploring sources of doubt in the correctness of both static and measurement-based WCET analysis.
  •  
6.
  • Barkah, Dani, et al. (author)
  • Evaluation of Automatic Flow Analysis for WCET Calculation on Industrial Real-Time System Code
  • 2008
  • In: Proceedings - Euromicro Conference on Real-Time Systems, 2008. - 9780769532981 ; , s. 331-340
  • Conference paper (peer-reviewed). Abstract:
    • A static Worst-Case Execution Time (WCET) analysis derives upper bounds for the execution times of programs. Such analysis requires information about the possible program flows. The current practice is to provide this information manually, which can be laborious and error-prone. An alternative is to derive this information through an automated flow analysis. In this article, we present a case study where an automatic flow analysis method was tested on industrial real-time system code. The same code was the subject of an earlier WCET case study, where it was analysed using manual annotations for the flow information. The purpose of the current study was to see to what extent the same flow information could be found automatically. The results show that for the most part this is indeed possible, and we could derive comparable WCET estimates using the automatically generated flow information. In addition, valuable insights were gained on what is needed to make flow analysis methods work on real production code.
  •  
7.
  • Bohlin, Markus, 1976- (author)
  • A Study of Combinatorial Optimization Problems in Industrial Computer Systems
  • 2009
  • Doctoral thesis (other academic/artistic). Abstract:
    • A combinatorial optimization problem is an optimization problem where the number of possible solutions is finite and grows combinatorially with the problem size. Combinatorial problems exist everywhere in industrial systems. This thesis focuses on solving three such problems which arise within two different areas where industrial computer systems are often used. Within embedded systems and real-time systems, we investigate the problems of allocating stack memory for a system where shared stacks may be used, and of estimating the highest response time of a task in a system of industrial complexity. We propose a number of different algorithms to compute safe upper bounds on run-time stack usage whenever the system supports stack sharing. The algorithms have in common that they can exploit commonly available information regarding the timing behaviour of the tasks in the system. Given upper bounds on the individual stack usage of the tasks, it is possible to estimate the worst-case stack behaviour by analysing the possible and impossible preemption patterns. Using relations on offsets and precedences, we form a preemption graph, which is further analysed to find safe upper bounds on the maximal preemption chains in the system. For the special case where all tasks exist in a single static schedule and share a single stack, we propose a polynomial algorithm to solve the problem. For generalizations of this problem, we propose an exact branch-and-bound algorithm for smaller problems and a polynomial heuristic algorithm for cases where the branch-and-bound algorithm fails to find a solution in reasonable time. All algorithms are evaluated in comprehensive experimental studies. The polynomial algorithm is implemented and shipped in the developer tool set for a commercial real-time operating system, Rubus OS. The second problem we study in the thesis is how to estimate the highest response time of a specified task in a complex industrial real-time system. The response-time analysis is done using a best-effort approach, where a detailed model of the system is simulated on input constructed using a local search procedure. In an evaluation on three different systems we can see that the new algorithm was able to produce higher response times much faster than what has previously been possible. Since the analysis is based on simulation and measurement, the results are not safe in the sense that they are always higher than or equal to the true response time of the system. The value of the method lies instead in that it makes it possible to analyse complex industrial systems which cannot be analysed accurately using existing safe approaches. The third problem is in the area of maintenance planning, and focuses on how to dynamically plan maintenance for industrial systems. Within this area we have focused on industrial gas turbines and rail vehicles. We have developed algorithms and a planning tool which can be used to plan maintenance for gas turbines and other stationary machinery. In such problems, it is often the case that performing several maintenance actions at the same time is beneficial, since many of these jobs can be done in parallel, which reduces the total downtime of the unit. The core of the problem is therefore how to (or how not to) group maintenance activities so that a composite cost due to spare parts, labor and loss of production due to downtime is minimized. We allow each machine to have individual schedules for each component in the system.
For rail vehicles, we have evaluated the effect of replanning maintenance in the case where the component maintenance deadline is set to reflect a maximum risk of breakdown in a Gaussian failure distribution. In such a model, we show by simulation that replanning of maintenance can reduce the number of maintenance stops when the variance and expected value of the distribution are increased. For the gas turbine maintenance planning problem, we have evaluated the planning software on a real-world scenario from the oil and gas industry and compared it to the solutions obtained from a commercial integer programming solver. It is estimated that the availability increase from using our planning software is between 0.5 and 1.0%, which is substantial considering that availability is currently already at 97-98%.
  •  
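The stack-sharing analysis in entry 7 rests on a simple combinatorial core: if task B can preempt task A, their frames occupy the shared stack simultaneously, so a safe bound is the heaviest chain in the preemption graph. The sketch below computes that bound for a hard-coded toy DAG; the task names, stack sizes, and preemption relation are invented, and deriving the relation from offsets and precedences (as the thesis does) is out of scope here.

    from functools import lru_cache

    stack_usage = {"A": 256, "B": 128, "C": 512, "D": 64}   # bytes per task

    # can_preempt[x]: tasks that may preempt x while it runs.
    can_preempt = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

    @lru_cache(maxsize=None)
    def max_chain(task):
        """Worst-case shared-stack usage of a preemption chain rooted at task."""
        nested = max((max_chain(t) for t in can_preempt[task]), default=0)
        return stack_usage[task] + nested

    print("safe shared-stack bound:", max(max_chain(t) for t in stack_usage))
    # -> 832 (A preempted by C, C preempted by D: 256 + 512 + 64)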
8.
  • Broman, David, 1977- (author)
  • Meta-Languages and Semantics for Equation-Based Modeling and Simulation
  • 2010
  • Doctoral thesis (other academic/artistic). Abstract:
    • Performing computational experiments on mathematical models instead of building and testing physical prototypes can drastically reduce the development cost for complex systems such as automobiles, aircraft, and powerplants. In the past three decades, a new category of equation-based modeling languages has appeared that is based on acausal and object-oriented modeling principles, enabling good reuse of models. However, the modeling languages within this category have grown to be large and complex, and the specifications of their semantics are informally defined, typically described in natural language. The lack of a formal semantics makes these languages hard to interpret unambiguously and to reason about. This thesis concerns the problem of designing the semantics of such equation-based modeling languages in a way that allows formal reasoning and increased correctness. The work is presented in two parts. In the first part we study the state-of-the-art modeling language Modelica. We analyze the concepts of types in Modelica and conclude that there are two kinds of type concepts: class types and object types. Moreover, a concept called structural constraint delta is proposed, which is used for isolating the faults of an over- or under-determined model. In the second part, we introduce a new research language called the Modeling Kernel Language (MKL). By introducing the concept of higher-order acausal models (HOAMs), we show that it is possible to create expressive modeling libraries in a manner analogous to Modelica, but using a small and simple language concept. In contrast to the current state-of-the-art modeling languages, the semantics of how to use the models, including meta operations on models, are also specified in MKL libraries. This enables extensible formal executable specifications where important language features are expressed through libraries rather than by adding completely new language constructs. MKL is a statically typed language based on a typed lambda calculus. We define the core of the language formally using operational semantics and prove type safety. An MKL interpreter is implemented and verified in comparison with a Modelica environment.
  •  
9.
  • Bygde, Stefan, et al. (author)
  • An Efficient Algorithm for Parametric WCET Calculation
  • 2009
  • In: 2009 15TH IEEE INTERNATIONAL CONFERENCE ON EMBEDDED AND REAL-TIME COMPUTING SYSTEMS AND APPLICATIONS, PROCEEDINGS. - LOS ALAMITOS : IEEE COMPUTER SOC. - 9780769537870 ; , s. 13-21
  • Conference paper (peer-reviewed). Abstract:
    • Static WCET analysis is a process dedicated to deriving a safe upper bound of the worst-case execution time of a program. In many real-time systems, however, a constant global WCET estimate is not always so useful, since a program may behave very differently depending on its configuration or mode. A parametric WCET analysis derives the upper bound as a formula rather than a constant. This paper presents a new efficient algorithm that can obtain a safe parametric estimate of the WCET of a program. The algorithm is evaluated on a large set of benchmarks and compared to a previous approach to parametric WCET calculation. The evaluation shows that the new algorithm, at the cost of some imprecision, scales much better and can handle more realistic programs than the previous approach.
  •  
10.
  • Bygde, Stefan, et al. (author)
  • An efficient algorithm for parametric WCET calculation
  • 2011
  • In: Journal of systems architecture. - : Elsevier BV. - 1383-7621 .- 1873-6165. ; 57:6, s. 614-624
  • Journal article (peer-reviewed). Abstract:
    • Static WCET analysis is a process dedicated to deriving a safe upper bound of the worst-case execution time of a program. In many real-time systems, however, a constant global WCET estimate is not always so useful, since a program may behave very differently depending on its configuration or mode. A parametric WCET analysis derives the upper bound as a formula rather than a constant. This paper presents a new algorithm that can obtain a safe parametric estimate of the WCET of a program. The algorithm is evaluated on a large set of benchmarks and compared to a previous approach to parametric WCET calculation. The evaluation shows that the new algorithm, at the cost of some imprecision, scales much better and can handle more realistic programs than the previous approach.
  •  
11.
  • Bygde, Stefan, et al. (author)
  • Fully Bounded Polyhedral Analysis of Integers with Wrapping
  • 2011
  • In: Electronic Notes in Theoretical Computer Science. - : Elsevier BV. - 1571-0661. ; 288, s. 3-13
  • Journal article (peer-reviewed). Abstract:
    • Abstract interpretation using convex polyhedra is a common and powerful program analysis technique to discover linear relationships among variables in a program. However, the classical way of performing polyhedral analysis does not model the fact that values typically are stored as fixed-size binary strings and usually have a wrap-around semantics in the case of overflows. In embedded systems where 16-bit or even 8-bit processors are used, wrapping behaviour may even be used intentionally. Thus, to accurately and correctly analyse such systems, the wrapping has to be modelled. We present an approach to polyhedral analysis which derives polyhedra that are bounded in all dimensions and thus provides polyhedra that contain a finite number of integer points. Our approach uses a previously suggested wrapping technique for polyhedra but combines it in a novel way with limited widening, a suitable placement of widening points and restrictions on unbounded variables. We show how our method has the potential to significantly increase the precision compared to the previously suggested wrapping method.
  •  
12.
  • Bygde, Stefan, et al. (author)
  • Improved Precision in Polyhedral Analysis with Wrapping
  • 2017
  • In: Science of Computer Programming. - : Elsevier BV. - 0167-6423 .- 1872-7964. ; 133, s. 74-87
  • Journal article (peer-reviewed). Abstract:
    • Abstract interpretation using convex polyhedra is a common and powerful program analysis technique to discover linear relationships among variables in a program. However, the classical way of performing polyhedral analysis does not model the fact that values typically are stored as fixed-size binary strings and usually have wrap-around semantics in the case of overflows. In resource-constrained embedded systems, where 8- or 16-bit processors are used, wrapping behaviour may even be used intentionally to save instructions and execution time. Thus, to analyse such systems accurately and correctly, the wrapping has to be modelled. We present an approach to polyhedral analysis which derives polyhedra that are bounded in all dimensions. Our approach is based on a previously suggested wrapping technique by Simon and King, combined with limited widening, a suitable placement of widening points and size-induced restrictions on unbounded variables. With this method, we can derive fully bounded polyhedra in every step of the analysis. We have implemented both our method and Simon and King's method and compared them. Our experiments show that for a suite of benchmark programs our method gives at least as precise results as Simon and King's method. In some cases we obtain a significantly improved result.
  •  
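The wrap-around issue that entries 11 and 12 model for full polyhedra can be shown in one dimension: an analysis interval that overflows the machine word must be mapped back into the representable range, possibly splitting in two. The sketch below does this for unsigned 8-bit values; it is a toy stand-in for the polyhedral construction in the papers, not an excerpt from them.

    def wrap_interval(lo, hi, bits=8):
        """Intervals denoted by [lo, hi] under wrap-around modulo 2**bits."""
        size = 1 << bits
        if hi - lo + 1 >= size:              # spans a full period: any value
            return [(0, size - 1)]
        lo_w, hi_w = lo % size, hi % size
        if lo_w <= hi_w:                     # still a single interval
            return [(lo_w, hi_w)]
        return [(0, hi_w), (lo_w, size - 1)] # wrapped: split in two pieces

    print(wrap_interval(250, 260))   # [(0, 4), (250, 255)]
    print(wrap_interval(10, 20))     # [(10, 20)]
    print(wrap_interval(0, 1000))    # [(0, 255)]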
13.
  • Bygde, Stefan, 1980- (author)
  • Parametric WCET Analysis
  • 2013
  • Doctoral thesis (other academic/artistic). Abstract:
    • In a real-time system, it is crucial to ensure that all tasks of the system hold their deadlines. A missed deadline in a real-time system means that the system has not been able to function correctly. If the system is safety critical, this could potentially lead to disaster. To ensure that all tasks keep their deadlines, the Worst-Case Execution Time (WCET) of these tasks has to be known. Static analysis analyses a safe model of the hardware together with the source or object code of a program to derive an estimate of the WCET. This estimate is guaranteed to be equal to or greater than the real WCET. This is done by making calculations which in all steps make sure that the time is exactly or conservatively estimated. In many cases, however, the execution time of a task or a program is highly dependent on the given input. Thus, the estimated worst case may correspond to some input or configuration which is rarely (or never) used in practice. For such systems, where execution time is highly input dependent, a more accurate timing analysis which takes input into consideration is desired. In this thesis we present a method based on abstract interpretation and counting of semantic states of a program that gives a WCET in terms of some input to the program. This means that the WCET is expressed as a formula over the input rather than as a constant; once the input is known, the resulting WCET estimate may be more accurate than the absolute and global WCET. Our research also investigates how this analysis can be made safe when arithmetic operations cause integers to wrap around, whereas the common assumption in static analysis is that variables can take the value of any integer. Our method has been implemented as a prototype and as a part of a static WCET analysis tool in order to gain experience with the method and to evaluate its different aspects. Our method shows that it is possible to obtain very complex and detailed information about the timing of a program, given its input.
  •  
14.
  • Bygde, Stefan, et al. (author)
  • Static Analysis of Bounded Polyhedra
  • 2011
  • Conference paper (peer-reviewed). Abstract:
    • We present a method for polyhedral abstract interpretation which derives fully bounded polyhedra for every step in the analysis. Contrary to classical polyhedral analysis, this method is sound for integer-valued variables stored as fixed-size binary strings; wrap-arounds are correctly modelled. Our work is based on earlier work by Axel Simon and Andy King but aims to significantly reduce the precision loss introduced in their method.
  •  
15.
  • Bygde, Stefan, 1980- (author)
  • Static WCET Analysis Based on Abstract Interpretation and Counting of Elements
  • 2010
  • Licentiate thesis (other academic/artistic). Abstract:
    • In a real-time system, it is crucial to ensure that all tasks of the system hold their deadlines. A missed deadline in a real-time system means that the system has not been able to function correctly. If the system is safety critical, this can lead to disaster. To ensure that all tasks keep their deadlines, the Worst-Case Execution Time (WCET) of these tasks has to be known. This can be done by measuring the execution times of a task; however, this is inflexible, time consuming and in general not safe (i.e., the worst case might not be found). Unless the task is measured with all possible input combinations and configurations, which is in most cases out of the question, there is no way to guarantee that the longest measured time actually corresponds to the real worst case. Static analysis analyses a safe model of the hardware together with the source or object code of a program to derive an estimate of the WCET. This estimate is guaranteed to be equal to or greater than the real WCET. This is done by making calculations which in all steps make sure that the time is exactly or conservatively estimated. In many cases, however, the execution time of a task or a program is highly dependent on the given input. Thus, the estimated worst case may correspond to some input or configuration which is rarely (or never) used in practice. For such systems, where execution time is highly input dependent, a more accurate timing analysis which takes input into consideration is desired. In this thesis we present a framework based on abstract interpretation and counting of possible semantic states of a program. This is a general method of WCET analysis, which is language independent and platform independent. The two main applications of this framework are a loop bound analysis and a parametric analysis. The loop bound analysis can be used to quickly find upper bounds for loops in a program, while the parametric framework provides an input-dependent estimation of the WCET. The input-dependent estimation can give much more accurate estimates if the input is known at run-time.
  •  
16.
  • Bygde, Stefan, et al. (author)
  • Towards an automatic parametric WCET analysis
  • 2008
  • In: OpenAccess Series in Informatics. - 9783939897101 ; , s. 9-17
  • Conference paper (peer-reviewed). Abstract:
    • Static WCET analysis obtains a safe estimation of the WCET of a program. The timing behaviour of a program depends in many cases on input, and an analysis could take advantage of this information to produce, as an estimation of the WCET, a formula in the input variables rather than a constant. A method to do this was suggested in [12]. We have implemented a working prototype of the method to evaluate its feasibility in practice. We show how to reduce the complexity of the method and how to simplify parts of it to make it practical for implementation. The prototype implementation indicates that the method presented in [12] can successfully be implemented for a simple imperative language, mostly by using existing libraries.
  •  
17.
  • Carlson, Jan, et al. (author)
  • A resource-efficient event algebra
  • 2010
  • In: Science of Computer Programming. - : Elsevier BV. - 0167-6423 .- 1872-7964. ; 75:12, s. 1215-1234
  • Journal article (peer-reviewed). Abstract:
    • Events play many roles in computer systems, ranging from hardware interrupts, over event-based software architecture, to monitoring and managing of complex systems. In many applications, however, individual event occurrences are not the main point of concern, but rather the occurrences of certain event patterns. Such event patterns can be defined by means of an event algebra, i.e., expressions representing the patterns of interest are built from simple events and operators such as disjunction, sequence, etc. We propose a novel event algebra with intuitive operators (a claim which is supported by a number of algebraic properties). We also present an efficient detection algorithm that correctly detects any expression with bounded memory, which makes this algebra particularly suitable for resource-constrained applications such as embedded systems.
  •  
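One reason the event algebra in entry 17 suits embedded systems is that every expression can be detected with bounded memory. The fragment below shows the flavour of such detection for a single sequence pattern, "A followed by B", storing only the most recent A. The class, the event encoding, and the keep-the-latest policy are invented for illustration and do not reproduce the paper's full algorithm or its exact semantics.

    class SequenceDetector:
        """Detect the pattern 'first followed by second' with O(1) state."""
        def __init__(self, first, second):
            self.first, self.second = first, second
            self.pending = None          # time of latest `first`, or None

        def step(self, event, time):
            """Feed one event; return (start, end) when the pattern completes."""
            if event == self.first:
                self.pending = time      # overwrite: memory stays constant
            elif event == self.second and self.pending is not None:
                match = (self.pending, time)
                self.pending = None
                return match
            return None

    det = SequenceDetector("A", "B")
    for t, ev in enumerate(["C", "A", "A", "C", "B", "B"]):
        hit = det.step(ev, t)
        if hit:
            print("A;B detected over interval", hit)   # prints (2, 4)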
18.
  • Carlson, Jan, 1976- (author)
  • Event Pattern Detection for Embedded Systems
  • 2007
  • Doctoral thesis (other academic/artistic). Abstract:
    • Events play an important role in many computer systems, from small reactive embedded applications to large distributed systems. Many applications react to events generated by a graphical user interface or by external sensors that monitor the system environment, and other systems use events for communication and synchronisation between independent subsystems. In some applications, however, individual event occurrences are not the main point of concern. Instead, the system should respond to certain event patterns, such as "the start button being pushed, followed by a temperature alarm within two seconds". One way to specify such event patterns is by means of an event algebra with operators for combining the simple events of a system into specifications of complex patterns. This thesis presents an event algebra with two important characteristics. First, it complies with a number of algebraic laws, which shows that the algebra operators behave as expected. Second, any pattern represented by an expression in this algebra can be efficiently detected with bounded resources in terms of memory and time, which is particularly important when event pattern detection is used in embedded systems, where resource efficiency and predictability are crucial. In addition to the formal algebra semantics and an efficient detection algorithm, the thesis describes how event pattern detection can be used in real-time systems without support from the underlying operating system, and presents schedulability theory for such systems. It also describes how the event algebra can be combined with a component model for embedded systems, to support high-level design of systems that react to event patterns.
  •  
19.
  •  
20.
  • Ciccozzi, Federico, 1983-, et al. (author)
  • A Comprehensive Exploration of Languages for Parallel Computing
  • 2023
  • In: ACM Computing Surveys. - : ASSOC COMPUTING MACHINERY. - 0360-0300 .- 1557-7341. ; 55:2
  • Journal article (peer-reviewed). Abstract:
    • Software-intensive systems in most domains, from autonomous vehicles to health, are becoming predominantly parallel in order to efficiently manage large amounts of data in short (even real) time. There is an incredibly rich literature on languages for parallel computing, and it is therefore difficult for researchers and practitioners, even experienced ones in this very field, to get a grasp on them. With this work we provide a comprehensive, structured, and detailed snapshot of documented research on those languages to identify trends, technical characteristics, open challenges, and research directions. In this article, we report on the planning, execution, and results of our systematic peer-reviewed as well as grey literature review, which aimed at providing such a snapshot by analysing 225 studies.
  •  
21.
  • Curuklu, Baran, 1969- (author)
  • A Canonical Model of the Primary Visual Cortex
  • 2005
  • Doctoral thesis (other academic/artistic). Abstract:
    • A new computer model shows how the brain processes information. Baran Çürüklü's research aims at understanding how the visual centre of the brain works. This is important for research in neuroscience and artificial intelligence. Over the past decades, brain research has shown that different areas of the cerebral cortex within one and the same species have a similar structure, and that there are great similarities between the cortices of different species. These results also suggest that nerve cells use a universal language when they communicate with each other. Moreover, there appear to be general rules that can explain how the brain develops and attains its final form. A direct consequence of these hypotheses is that Baran Çürüklü's research on the visual centre may have a large impact on research on other parts of the brain. The visual centre is the part of the cerebral cortex that receives the incoming signals from the eye. It is a very important part of the brain and contains an estimated 40% of the nerve cells of the cerebral cortex. Baran Çürüklü has mapped in detail the response properties of the nerve cells in the primary visual cortex during the brain's development. This work builds on Hubel and Wiesel's discovery that the nerve cells in the primary visual cortex respond to contrast edges. Their research resulted in the feedforward model, an important part of the work for which they were awarded the Nobel Prize in Physiology or Medicine (1981). Although this model has been the most cited model in the literature, much research remains before the response properties of the nerve cells are fully understood. Baran Çürüklü's model complements the feedforward model by, among other things, explaining how the brain can recognize shapes under different contrast conditions. The model also shows how the environment influences the development of the visual centre.
  •  
22.
  • Dodig-Crnkovic, Gordana, 1955- (author)
  • Investigations into Information Semantics and Ethics of Computing
  • 2006
  • Doctoral thesis (other academic/artistic). Abstract:
    • The recent development of the research field of Computing and Philosophy has triggered investigations into the theoretical foundations of computing and information. This thesis consists of two parts which are the result of studies in the two areas of Philosophy of Computing (PC) and Philosophy of Information (PI), regarding the production of meaning (semantics) and the value system with applications (ethics). The first part develops a unified dual-aspect theory of information and computation, in which information is characterized as structure, and computation is the information dynamics. This enables a naturalization of epistemology, based on interactive information representation and communication. In the study of systems modeling, meaning, truth and agency are discussed within the framework of the PI/PC unification. The second part of the thesis addresses the necessity of ethical judgment in rational agency, illustrated by the problem of information privacy and surveillance in the networked society. The value grounds and socio-technological solutions for securing trustworthiness of computing are analyzed. Privacy issues clearly show the need for computing professionals to contribute to the understanding of the technological mechanisms of Information and Communication Technology. The main original contribution of this thesis is the unified dual-aspect theory of computation/information. Semantics of information is seen as a part of the data-information-knowledge structuring, in which complex structures are self-organized by the computational processing of information. Within the unified model, complexity is a result of computational processes on informational structures. The thesis argues for the necessity of computing beyond the Turing-Church limit, motivated by natural computation, and more broadly by pancomputationalism and paninformationalism, seen as two complementary views of the same physical reality. Moreover, it follows that pancomputationalism does not depend on the assumption that the physical world at some basic level is digital. Contrary to many beliefs, it is entirely compatible with dual (analogue/digital) quantum-mechanical computing.
  •  
23.
  • Drakenberg, N. Peter, et al. (author)
  • An Efficient Semi-Hierarchical Array Layout
  • 2001
  • In: Interaction between Compilers and Computer Architectures. - : Kluwer Academic Publishers. - 0792373707 ; , s. 21-43
  • Conference paper (peer-reviewed). Abstract:
    • For high-level programming languages, linear array layouts (e.g., column-major and row-major orders) have de facto been the sole form of mapping array elements to memory. The increasingly deep and complex memory hierarchies present in current computer systems expose several deficiencies of linear array layouts. One such deficiency is that linear array layouts strongly favor locality in one index dimension of multidimensional arrays. Secondly, the exact mapping of array elements to cache locations depends on the array's size, which effectively renders linear array layouts non-analyzable with respect to cache behavior. We present and evaluate an alternative, semi-hierarchical, array layout which differs from linear array layouts by being neutral with respect to locality in different index dimensions and by enabling accurate and precise analysis of cache behaviors at compile-time. Simulation results indicate that the proposed layout may exhibit vastly improved TLB behavior, leading to clearly measurable improvements in execution time, despite a lack of suitable hardware support for address computations. Cache behavior is formalized in terms of conflict vectors, and it is shown how to compute such conflict vectors at compile-time.
  •  
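As a point of comparison for the dimension-neutral layout in entry 23, Morton (Z-order) indexing is the classic textbook example of a non-linear layout that treats both index dimensions symmetrically and maps elements to addresses independently of the array's size. The sketch below shows only this standard layout as an illustration of the design space; the paper's semi-hierarchical layout differs in its details.

    def morton_index(row, col, bits=16):
        """Interleave the bits of (row, col) into one linear offset."""
        idx = 0
        for b in range(bits):
            idx |= ((row >> b) & 1) << (2 * b + 1)   # row bits: odd positions
            idx |= ((col >> b) & 1) << (2 * b)       # col bits: even positions
        return idx

    # Neighbouring elements in EITHER dimension stay close in memory:
    print([morton_index(0, c) for c in range(4)])   # [0, 1, 4, 5]
    print([morton_index(r, 0) for r in range(4)])   # [0, 2, 8, 10]
    print(morton_index(2, 3), morton_index(3, 2))   # 13 14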
24.
  • Ermedahl, Andreas, et al. (author)
  • Deriving WCET Bounds by Abstract Execution
  • 2011
  • In: Proc. 11th International Workshop on Worst-Case Execution Time (WCET) Analysis (WCET 2011:). - 9781632663153 ; , s. 72-82
  • Conference paper (peer-reviewed). Abstract:
    • Standard static WCET analysis methods today are based on the IPET technique, where WCET estimation is formulated as an integer linear programming (ILP) problem subject to linear program flow constraints with an objective function derived from the hardware timing model. The estimate is then calculated by an ILP solver. The hardware cost model, as well as the program flow constraints, are often derived using a static program analysis framework such as abstract interpretation. An alternative idea to estimate the WCET is to add time as an explicit variable, incremented for each basic block in the code. The possible values of this variable can then be bound by a value analysis. We have implemented this idea by integrating the time estimation in our Abstract Execution method for calculating program flow constraints. This method is in principle a very detailed value analysis. As it computes intervals bounding variable values, it bounds both the BCET and the WCET. In addition, it derives the explicit execution paths through the program which correspond to the calculated BCET and WCET bounds. We have compared the precision and the analysis time with the traditional IPET technique for a number of benchmark programs, and show that the new method typically is capable of calculating as tight or even tighter WCET estimates in shorter time. Our current implementation can handle simple timing models with constant execution times for basic blocks and edges in the CFG, but it is straightforward to extend the method to more detailed hardware timing models.
  •  
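The core idea of entry 24, treating time as one more program variable bounded by value analysis, can be mimicked with plain interval arithmetic. Below, a [BCET, WCET] interval is pushed through an invented control-flow skeleton: every basic block adds its cost, and alternative branches are joined by min/max. All costs, the loop bound, and the structure are made up; the paper's Abstract Execution is far more detailed.

    def add(interval, cost):
        lo, hi = interval
        return (lo + cost, hi + cost)

    def join(a, b):
        """Merge the time intervals of two alternative paths."""
        return (min(a[0], b[0]), max(a[1], b[1]))

    time = (0, 0)
    time = add(time, 5)                  # entry block: 5 cycles
    for _ in range(10):                  # loop bound known to be 10
        time = add(time, 2)              # loop header
        then_path = add(time, 7)         # then-branch body
        else_path = add(time, 3)         # else-branch body
        time = join(then_path, else_path)
    time = add(time, 4)                  # exit block

    print("BCET/WCET bounds:", time)     # (59, 99)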
25.
  • Ermedahl, Andreas, et al. (author)
  • Loop Bound Analysis based on a Combination of Program Slicing, Abstract Interpretation, and Invariant Analysis
  • 2007
  • In: OpenAccess Series in Informatics, Volume 6, 2007. - 9783939897057
  • Conference paper (peer-reviewed). Abstract:
    • Static Worst-Case Execution Time (WCET) analysis is a technique to derive upper bounds for the execution times of programs. Such bounds are crucial when designing and verifying real-time systems. A key component for the static derivation of precise WCET estimates is upper bounds on the number of times different loops can be iterated. In this paper we present an approach for deriving upper loop bounds based on a combination of standard program analysis techniques. The idea is to bound the number of different states in the loop which can influence the exit conditions. Given that the loop terminates, this number provides an upper loop bound. An algorithm based on the approach has been implemented in our WCET analysis tool SWEET. We evaluate the algorithm on a number of standard WCET benchmarks, giving evidence that it is capable of deriving valid bounds for many types of loops.
  •  
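The counting argument of entry 25 fits in a few lines for the simplest case: if the variables influencing a loop's exit condition cannot revisit a state, the number of distinct states they can assume bounds the iteration count. The helper below applies this to a single counter whose range comes from a value analysis; the names and numbers are invented, and real loops need the slicing and invariant analysis that the paper combines.

    def loop_bound(i_min, i_max, step):
        """Upper bound on iterations of `for (i = i0; i < N; i += step)`
        when a value analysis bounds i to [i_min, i_max] inside the loop."""
        assert step > 0
        distinct_states = (i_max - i_min) // step + 1
        return distinct_states

    # e.g. value analysis found i in [0, 99], incremented by 4 per round:
    print(loop_bound(0, 99, 4))   # 25 -> at most 25 iterations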
26.
  • Falk, H., et al. (author)
  • TACLeBench : A benchmark collection to support worst-case execution time research
  • 2016
  • In: OpenAccess Series in Informatics. - 9783959770255 ; , s. 2.1-2.10
  • Conference paper (peer-reviewed). Abstract:
    • Engineering-related research, such as research on worst-case execution time, uses experimentation to evaluate ideas. For these experiments we need example programs, and to make the research experimentation repeatable those programs shall be made publicly available. We collected open-source programs, adapted them to a common coding style, and provide the collection as open source. The benchmark collection is called TACLeBench and is available from GitHub in version 1.9 at the publication date of this paper. One of the main features of TACLeBench is that all programs are self-contained, without any dependencies on standard libraries or an operating system.
  •  
27.
  • Faragardi, Hamid Reza, et al. (author)
  • A communication-aware solution framework for mapping AUTOSAR runnables on multi-core systems
  • 2014
  • In: 19th IEEE International Conference on Emerging Technologies and Factory Automation, ETFA 2014. - 9781479948468 ; , article number 7005244
  • Conference paper (peer-reviewed). Abstract:
    • An AUTOSAR-based software application contains a set of software components, each of which encapsulates a set of runnable entities. In fact, the mission of the system is fulfilled as a result of the collaboration between the runnables. Several trends have recently emerged to utilize multi-core technology to run AUTOSAR-based software. Not only is the overhead of communication between the runnables one of the major performance bottlenecks in multi-core processors, it is also the main source of unpredictability in the system. Appropriate mapping of the runnables onto a set of tasks (called the mapping process), along with proper allocation of the tasks to processing cores (called the task allocation process), can significantly reduce the communication overhead. In this paper, three solutions are suggested, each of which comprises both the mapping and the allocation processes. The goal is to maximize key performance aspects by reducing the overall inter-runnable communication time, besides satisfying given timing and precedence constraints. A large number of randomly generated experiments are carried out to demonstrate the efficiency of the proposed solutions.
  •  
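A rough feel for the mapping objective in entries 27, 29 and 31 can be given by a greedy placement: put heavily communicating runnables on the same core while a utilization cap holds. Everything below (utilizations, communication volumes, the two-core platform, and the greedy rule itself) is invented for illustration; the papers use more elaborate heuristics and ILP formulations.

    runnables = {"r1": 0.2, "r2": 0.3, "r3": 0.4, "r4": 0.3}       # utilization
    comm = {("r1", "r2"): 50, ("r2", "r3"): 10, ("r3", "r4"): 40}  # volume
    CORES, CAP = 2, 0.8

    def volume(r, members):
        """Communication volume between runnable r and a core's members."""
        return sum(v for (a, b), v in comm.items()
                   if (a == r and b in members) or (b == r and a in members))

    cores = [set() for _ in range(CORES)]
    load = [0.0] * CORES
    # Heaviest runnables first; pick the feasible core with most traffic.
    for r in sorted(runnables, key=runnables.get, reverse=True):
        feasible = [c for c in range(CORES) if load[c] + runnables[r] <= CAP]
        best = max(feasible, key=lambda c: volume(r, cores[c]))  # assumes one exists
        cores[best].add(r)
        load[best] += runnables[r]

    print(cores, load)   # greedy result; the papers' methods optimize globally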
28.
  • Faragardi, Hamid Reza, et al. (author)
  • A resource efficient framework to run automotive embedded software on multi-core ECUs
  • 2018
  • In: Journal of Systems and Software. - : Elsevier BV. - 0164-1212 .- 1873-1228. ; 139, s. 64-83
  • Journal article (peer-reviewed). Abstract:
    • The increasing functionality and complexity of automotive applications requires not only the use of more powerful hardware, e.g., multi-core processors, but also efficient methods and tools to support design decisions. Component-based software engineering proved to be a promising solution for managing software complexity and allowing for reuse. However, there are several challenges inherent in the intersection of resource efficiency and predictability of multi-core processors when it comes to running component-based embedded software. In this paper, we present a software design framework addressing these challenges. The framework includes both mapping of software components onto executable tasks, and the partitioning of the generated task set onto the cores of a multi-core processor. This paper aims at enhancing resource efficiency by optimizing the software design with respect to: 1) the inter-software-components communication cost, 2) the cost of synchronization among dependent transactions of software components, and 3) the interaction of software components with the basic software services. An engine management system, one of the most complex automotive sub-systems, is considered as a use case, and the experimental results show a reduction of up to 11.2% total CPU usage on a quad-core processor, in comparison with the common framework in the literature.
  •  
29.
  • Faragardi, Hamid Reza, et al. (author)
  • An efficient scheduling of AUTOSAR runnables to minimize communication cost in multi-core systems
  • 2014
  • In: 2014 7th International Symposium on Telecommunications, IST 2014. - 9781479953592 ; , s. 41-48
  • Conference paper (peer-reviewed). Abstract:
    • AUTOSAR has developed into the worldwide standard for automotive embedded software systems. From a processor perspective, AUTOSAR was originally developed for single-core processor platforms. Recent trends have raised the desire to use multi-core processors to run AUTOSAR software. However, there are several challenges in reaching a highly efficient and predictable design of AUTOSAR-based embedded software on multi-core processors. In this paper a solution framework comprising both the mapping of runnables onto a set of tasks and the scheduling of the generated task set on a multi-core processor is suggested. The goal of the work presented in this paper is to minimize the overall inter-runnable communication cost besides meeting all corresponding timing and precedence constraints. The proposed solution framework is evaluated and compared with an exhaustive method to demonstrate convergence to an optimal solution. Since the exhaustive method is not applicable to large instances of the problem, the proposed framework is also compared with a well-known meta-heuristic algorithm to substantiate its capability to scale up. The experimental results clearly demonstrate the high efficiency of the solution in terms of both communication cost and average processor utilization.
  •  
30.
  • Faragardi, Hamid Reza, 1987- (author)
  • Optimizing Timing-Critical Cloud Resources in a Smart Factory
  • 2018
  • Doctoral thesis (other academic/artistic). Abstract:
    • This thesis addresses the topic of resource efficiency in the context of timing-critical components that are used in the realization of a Smart Factory. The concept of the smart factory is a recent paradigm to build future production systems in a way that is both smarter and more flexible. When it comes to the realization of a smart factory, three principal elements play a significant role, namely Embedded Systems, the Internet of Things (IoT) and Cloud Computing. In a smart factory, efficient use of computing and communication resources is a prerequisite not only to obtain a desirable performance for running industrial applications, but also to minimize the deployment cost of the system in terms of the size and number of resources that are required to run industrial applications with an acceptable level of performance. Most industrial applications that are involved in smart factories, e.g., automation and manufacturing applications, are subject to a set of strict timing constraints that must be met for the applications to operate properly. Such applications, including the underlying hardware and software components that are used to run the application, constitute a real-time system. In real-time systems, the first and major concern of the system designer is to provide a solution where all timing constraints are met. To do so we need a time-predictable IoT/Cloud Computing framework to deal with the real-time constraints that are inherent in industrial applications running in a smart factory. Afterwards, with respect to the time-predictable framework, the number of required computing and communication resources can and should be optimized such that the deployed system is cost efficient. In this thesis, to investigate and present solutions that provide and improve the resource efficiency of computing and communication resources in a smart factory, we conduct research following three themes: (i) multi-core embedded processors, which are the key element in terms of computing components embedded in the machinery of a smart factory, (ii) cloud computing data centers, as the supplier of massive data storage and a large computational power, and (iii) IoT, for providing the interconnection of computing components embedded in the objects of a smart factory. Each of these themes is targeted separately to optimize resource efficiency. For each theme, we identify key challenges when it comes to achieving a resource-efficient design of the system. We then formulate the problem and propose solutions to optimize the resource efficiency of the system, while satisfying all timing constraints reflected in the model. We then propose a comprehensive resource allocation mechanism to optimize the resource efficiency in the whole system while considering the characteristics of each of these research themes. The experimental results indicate a clear improvement when it comes to timing-critical IoT/Cloud Computing resources in a smart factory. At the level of multi-core embedded devices, the total CPU usage of a quad-core processor is shown to be improved by 11.2%. At the level of Cloud Computing, the number of cloud servers that are required to execute a given set of real-time applications is shown to be reduced by 25.5%. In terms of network components that are used to collect sensor data, our proposed approach reduces the total deployment cost of the system by 24%. In summary, these results all contribute towards the realization of a future smart factory.
  •  
31.
  • Faragardi, Hamid Reza, et al. (author)
  • Towards a Communication-efficient Mapping of AUTOSAR Runnables on Multi-cores
  • 2013
  • Conference paper (peer-reviewed). Abstract:
    • Multi-core technology is recognized as a key component to develop new cost-efficient products. It can lead to a reduction of the overall hardware cost through hardware consolidation. However, it also results in tremendous challenges related to the combination of predictability and performance. AUTOSAR has developed into the worldwide standard for automotive embedded software systems, and one prominent aspect of the standard is its support for multi-core systems. In this paper, ongoing work on addressing the challenge of achieving a resource-efficient and predictable mapping of AUTOSAR runnables onto a multi-core system is discussed. The goal is to minimize the runnables' communication cost besides meeting the timing and precedence constraints of the runnables. The basic notion utilized in this research is to consider runnable granularity, which leads to an increased flexibility in allocating runnables to various cores, compared to task granularity, in which all of the runnables hosted by a task must be allocated to the same core. This increased flexibility can potentially reduce the communication cost. In addition, a heuristic algorithm is introduced to create a task set according to the mapping of runnables on the cores. In our current work, we are formulating the problem as an Integer Linear Program (ILP), so that conventional ILP solvers can be easily applied to derive a solution.
  •  
32.
  • Faxén, Karl-Filip, et al. (author)
  • Multicore computing--the state of the art
  • 2009
  • Reports (other academic/artistic). Abstract:
    • This document presents the current state of the art in multicore computing, in hardware and software, as well as ongoing activities, especially in Sweden. To a large extent, it draws on the presentations given at the Multicore Days 2008 organized by SICS, the Swedish Multicore Initiative and Ericsson Software Research, but the published literature and the experience of the authors have been equally important sources. It is clear that multicore processors will be with us for the foreseeable future; there seems to be no alternative way to provide substantial increases of microprocessor performance in the coming years. While processors with a few (2–8) cores are common today, this number is projected to grow as we enter the era of manycore computing. The road ahead for multicore and manycore hardware seems relatively clear, although some issues like the organization of the on-chip memory hierarchy remain to be settled. Multicore software is however much less mature, with fundamental questions of programming models, languages, tools and methodologies still outstanding.
  •  
33.
  • Felderer, Michael, 1978-, et al. (author)
  • Formal methods in industrial practice - Bridging the gap (track summary)
  • 2018
  • In: Lecture Notes in Computer Science. - Cham : Springer Verlag. - 9783030034269 ; , s. 77-81
  • Conference paper (peer-reviewed). Abstract:
    • For many decades already, formal methods have been considered to be the way forward in helping the software industry to make more reliable and trustworthy software. However, despite this strong belief and many individual success stories, no real change in industrial software development seems to be happening. In fact, the software industry itself is moving fast forward, and the gap between what formal methods can achieve and the daily software development practice does not seem to be getting smaller (and might even be growing).
  •  
34.
  • Gustafsson, Jan, et al. (author)
  • ALF - A Language for WCET Flow Analysis
  • 2009
  • In: OpenAccess Series in Informatics, Volume 10, 2009. - 9783939897149
  • Conference paper (peer-reviewed). Abstract:
    • Static Worst-Case Execution Time (WCET) analysis derives upper bounds for the execution times of programs. Such bounds are crucial when designing and verifying real-time systems. A key component in static WCET analysis is the flow analysis, which derives bounds on the number of times different code entities can be executed. Examples of flow information derived by a flow analysis are loop bounds and infeasible paths. Flow analysis can be performed on source code, intermediate code, or binary code: for the latter, there is a proliferation of instruction sets. Thus, flow analysis must deal with many code formats. However, the basic flow analysis techniques are more or less the same regardless of the code format. Thus, an interesting option is to define a common code format for flow analysis, which also allows for easy translation from the other formats. Flow analyses for this common format will then be portable, in principle supporting all types of code formats which can be translated to this format. Further, a common format simplifies the development of flow analyses, since only one specific code format needs to be targeted. This paper presents such a common code format, the ALF language (ARTIST2 Language for WCET Flow Analysis).
  •  
35.
  • Gustafsson, Jan, et al. (author)
  • ALF (ARTIST2 Language for Flow Analysis) Specification
  • 2008
  • Reports (other academic/artistic). Abstract:
    • ALF (ARTIST2 Language for Flow Analysis) is a language intended to be used for flow analysis in conjunction with WCET (Worst-Case Execution Time) analysis. ALF is designed to be possible to generate from a rich set of sources: linked binaries, source code, compiler intermediate formats, and possibly more.
  •  
36.
  • Gustafsson, Jan, et al. (author)
  • Algorithms for Infeasible Path Calculation
  • 2006
  • In: OpenAccess Series in Informatics, Volume 4, 2006. - 9783939897033
  • Conference paper (peer-reviewed). Abstract:
    • Static Worst-Case Execution Time (WCET) analysis is a technique to derive upper bounds for the execution times of programs. Such bounds are crucial when designing and verifying real-time systems. A key component in static WCET analysis is to derive flow information, such as loop bounds and infeasible paths. Such flow information can be provided either as annotations by the user, be calculated automatically by a flow analysis, or come from a combination of both. To make the analysis as simple, automatic and safe as possible, this flow information should be calculated automatically with no or very limited user interaction. In this paper we present three novel algorithms to calculate infeasible paths. The algorithms are all designed to be simple and efficient, both in terms of generated flow facts and in analysis running time. The algorithms have been implemented and tested on a set of WCET benchmark programs.
  •  
37.
  • Gustafsson, Jan, et al. (author)
  • ALL-TIMES - a European Project on Integrating Timing Technology
  • 2008
  • In: Communications in Computer and Information Science, Volume 17. - Berlin, Heidelberg : Springer Berlin Heidelberg. - 9783540884781 ; , s. 445-459
  • Conference paper (peer-reviewed). Abstract:
    • ALL-TIMES is a research project within the EC 7th Framework Programme. The project concerns embedded systems that are subject to safety, availability, reliability, and performance requirements. Increasingly, these requirements relate to correct timing. Consequently, the need for appropriate timing analysis methods and tools is growing rapidly. An increasing number of sophisticated and technically mature timing analysis tools and methods are becoming available commercially and in academia. However, tools and methods have historically been developed in isolation, and the potential users are missing process-related and continuous tool and methodology support. Due to this fragmentation, the timing analysis tool landscape does not yet fully exploit its potential. The ALL-TIMES project aims at combining independent research results into a consistent methodology, integrating available timing tools into a single framework, and developing new timing analysis methods and tools where appropriate. ALL-TIMES will enable interoperability of the various tools from leading commercial vendors and universities alike, and develop integrated tool chains using as well as creating open tool frameworks and interfaces. In order to evaluate the tool integrations, a number of industrial case studies will be performed. This paper describes the aims of the ALL-TIMES project, the partners, and the planned work.
  •  
38.
  • Gustafsson, Jan, et al. (author)
  • Approximate Worst-Case Execution Time Analysis for Early Stage Embedded Systems Development
  • 2009
• In: Software Technologies for Embedded and Ubiquitous Systems, Proceedings. - Berlin, Heidelberg : Springer. - 9783642102646 ; , s. 308-319
  • Book chapter (peer-reviewed)abstract
• A Worst-Case Execution Time (WCET) analysis finds upper bounds for the execution time of programs. Reliable WCET estimates are essential in the development of safety-critical embedded systems, where failures to meet timing deadlines can have catastrophic consequences. Traditionally, WCET analysis is applied only in the late stages of embedded system software development. This is problematic, since WCET estimates are often needed already in early stages of system development, for example as inputs to various kinds of high-level embedded system engineering tools such as modelling and component frameworks, scheduling analyses, timed automata, etc. Early WCET estimates are also useful for selecting a suitable processor configuration (CPU, memory, peripherals, etc.) for the embedded system. If early WCET estimates are missing, many of these early design decisions have to be made using experience and "gut feeling". If the final executable violates the timing bounds assumed in earlier system development stages, it may result in costly system re-design. This paper presents a novel method to derive approximate WCET estimates at early stages of the software development process. The method is currently being implemented and evaluated. The method should be applicable to a large variety of software engineering tools and hardware platforms used in embedded system development, leading to shorter development times and more reliable embedded software.
  •  
39.
  • Gustafsson, Jan, et al. (author)
  • Automatic Derivation of Loop Bounds and Infeasible Paths for WCET Analysis using Abstract Execution
  • 2006
  • In: Proceedings - Real-Time Systems Symposium. - 0769527612 ; , s. 57-66
  • Conference paper (peer-reviewed)abstract
• Static Worst-Case Execution Time (WCET) analysis is a technique to derive upper bounds for the execution times of programs. Such bounds are crucial when designing and verifying real-time systems. A key component for statically deriving safe and tight WCET bounds is information on the possible program flow through the program. Such flow information can be provided manually by user annotations, or automatically by a flow analysis. To make WCET analysis as simple and safe as possible, it should preferably be derived automatically, with no or very limited user interaction. In this paper we present a method for deriving such flow information called abstract execution. This method can automatically calculate loop bounds, including bounds for nested loops, as well as many types of infeasible paths. Our evaluations show that it can calculate WCET estimates automatically, without any user annotations, for a range of benchmark programs, and that our techniques for nested loops and infeasible paths can sometimes give substantially better WCET estimates than using loop bounds analysis only.
  •  
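As a rough illustration of the abstract-execution idea (a deliberately simplified sketch, not the paper's implementation), the following Python function derives lower and upper iteration bounds for the loop "i = 0; while i < n: i += 1" when the input n is only known to lie in an interval:

    def loop_bound(n_lo, n_hi):
        """Bound the iterations of: i = 0; while i < n: i += 1,
        given only that n lies in [n_lo, n_hi]."""
        i = (0, 0)                   # abstract value of i: an interval
        lower = upper = 0
        while True:
            # Restrict the abstract state to executions still in the loop,
            # i.e. those where i < n can still hold.
            stay = (i[0], min(i[1], n_hi - 1))
            if stay[0] > stay[1]:            # condition can no longer hold
                return lower, upper
            if i[1] < n_lo:                  # every execution iterates again
                lower += 1
            upper += 1                       # some execution iterates again
            i = (stay[0] + 1, stay[1] + 1)   # abstract effect of i += 1

    print(loop_bound(3, 5))   # -> (3, 5): the loop runs n times, n in [3, 5]

The abstract store is "executed" through the loop until the loop condition can no longer be true, counting the minimum and maximum number of iterations along the way; this mirrors, in miniature, how abstract execution derives loop bounds without user annotations.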
40.
  • Gustafsson, Jan, et al. (author)
  • Code Analysis for Temporal Predictability
  • 2006
  • In: Real-time systems. - : Springer Science and Business Media LLC. - 0922-6443 .- 1573-1383. ; 32:3, s. 253-277
  • Journal article (peer-reviewed)abstract
    • The execution time of software for hard real-time systems must be predictable. Further, safe and not overly pessimistic bounds for the worst-case execution time (WCET) must be computable. We conceived a programming strategy called WCET-oriented programming and a code transformation strategy, the single-path conversion, that aid programmers in producing code that meets these requirements. These strategies avoid and eliminate input-data dependencies in the code. The paper describes the formal analysis, based on abstract interpretation, that identifies input-data dependencies in the code and thus forms the basis for the strategies provided for hard real-time code development.
  •  
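The single-path conversion mentioned in the entry above can be illustrated on a toy example (a hedged sketch in Python, not the paper's machine-level transformation): an input-dependent branch is replaced by predicated, branch-free code, so every input follows the same instruction path and the execution time no longer depends on the input data.

    def max_branching(a, b):
        if a > b:                      # execution path depends on input data
            return a
        return b

    def max_single_path(a, b):
        p = int(a > b)                 # predicate is computed, not branched on
        return p * a + (1 - p) * b     # both "branches" execute; one is masked

    assert max_branching(3, 7) == max_single_path(3, 7) == 7
    assert max_branching(9, 2) == max_single_path(9, 2) == 9

On real hardware the same effect would be achieved with predicated (conditional) instructions; the Python version only mimics the masking arithmetic.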
41.
  • Gustafsson, Jan, et al. (author)
  • The Mälardalen WCET Benchmarks - Past, Present and Future
  • 2010
• In: OpenAccess Series in Informatics, Volume 15, 2010. - 9783939897217 ; , s. 136-146
  • Conference paper (peer-reviewed)abstract
• Modelling of real-time systems requires accurate and tight estimates of the Worst-Case Execution Time (WCET) of each task scheduled to run. In the past two decades, two main paradigms have emerged within the field of WCET analysis: static analysis and hybrid measurement-based analysis. These techniques have been successfully implemented in prototype and commercial toolsets. Yet, comparison among the WCET estimates derived by such tools remains somewhat elusive, as it requires a common set of benchmarks which serve a multitude of needs. The Mälardalen WCET research group maintains a large number of WCET benchmark programs for this purpose. This paper describes properties of the existing benchmarks, including their relative strengths and weaknesses. We propose extensions to the benchmarks which will allow any type of WCET tool to evaluate its results against other state-of-the-art tools, thus setting a high standard for future research and development. We also propose an organization supporting the future work with the benchmarks. We suggest forming a committee with responsibility for the benchmarks, and transforming the benchmark web site into an open wiki, with the possibility for the WCET community to easily update the benchmarks.
  •  
42.
  • Gustavsson, Andreas, 1982- (author)
  • Static Execution Time Analysis of Parallel Systems
  • 2016
  • Doctoral thesis (other academic/artistic)abstract
• The past trend of increasing processor throughput by increasing the clock frequency and the instruction level parallelism is no longer feasible due to extensive power consumption and heat dissipation. Therefore, the current trend in computer hardware design is to expose explicit parallelism to the software level. This is most often done using multiple, relatively slow and simple, processing cores situated on a single processor chip. The cores usually share some resources on the chip, such as some level of cache memory (which means that they also share the interconnect, e.g., a bus, to that memory and also all higher levels of memory). To fully exploit this type of parallel processor chip, programs running on it will have to be concurrent. Since multi-core processors are the new standard, even embedded real-time systems will (and some already do) incorporate this kind of processor and concurrent code. A real-time system is any system whose correctness depends on both its functional and temporal behavior. For some real-time systems, a failure to meet the temporal requirements can have catastrophic consequences. Therefore, it is crucial that methods to derive safe estimates of the timing properties of parallel computer systems are developed, if at all possible. This thesis presents a method to derive safe (lower and upper) bounds on the execution time of a given parallel system, thus showing that such methods must exist. The interface to the method is a small concurrent programming language, based on communicating and synchronizing threads, that is formally (syntactically and semantically) defined in the thesis. The method is based on abstract execution, which is itself based on abstract interpretation techniques that have been commonly used within the field of timing analysis of single-core computer systems, to derive safe timing bounds in an efficient (although over-approximative) way. The thesis also proves the soundness of the presented method (i.e., that the estimated timing bounds are indeed safe) and evaluates a prototype implementation of it.
  •  
43.
  • Gustavsson, Andreas, 1982- (author)
  • Static Timing Analysis of Parallel Systems Using Abstract Execution
  • 2014
  • Licentiate thesis (other academic/artistic)abstract
• The Power Wall has stopped the past trend of increasing processor throughput by increasing the clock frequency and the instruction level parallelism. Therefore, the current trend in computer hardware design is to expose explicit parallelism to the software level. This is most often done using multiple processing cores situated on a single processor chip. The cores usually share some resources on the chip, such as some level of cache memory (which means that they also share the interconnect, e.g., a bus, to that memory and also all higher levels of memory), and to fully exploit this type of parallel processor chip, programs running on it will have to be concurrent. Since multi-core processors are the new standard, even embedded real-time systems will (and some already do) incorporate this kind of processor and concurrent code. A real-time system is any system whose correctness depends on both its functional and temporal behavior. For some real-time systems, a failure to meet the temporal requirements can have catastrophic consequences. Therefore, it is of utmost importance that methods to analyze and derive safe estimates of the timing properties of parallel computer systems are developed. This thesis presents an analysis that derives safe (lower and upper) bounds on the execution time of a given parallel system. The interface to the analysis is a small concurrent programming language, based on communicating and synchronizing threads, that is formally (syntactically and semantically) defined in the thesis. The analysis is based on abstract execution, which is itself based on abstract interpretation techniques that have been commonly used within the field of timing analysis of single-core computer systems, to derive safe timing bounds in an efficient (although over-approximative) way. Basically, abstract execution simulates several real executions of the analyzed program in one go. The thesis also proves the soundness of the presented analysis (i.e., that the estimated timing bounds are indeed safe) and includes some examples, each showing different features or characteristics of the analysis.
  •  
44.
  • Gustavsson, Andreas, 1982-, et al. (author)
  • Timing Analysis of Parallel Software Using Abstract Execution
  • 2014
• In: Verification, Model Checking, and Abstract Interpretation. - Berlin, Heidelberg : Springer-Verlag Berlin. - 9783642540134 ; , s. 59-77
  • Conference paper (peer-reviewed)abstract
    • A major trend in computer architecture is multi-core processors. To fully exploit this type of parallel processor chip, programs running on it will have to be parallel as well. This means that even hard real-time embedded systems will be parallel. Therefore, it is of utmost importance that methods to analyze the timing properties of parallel real-time systems are developed. This paper presents an algorithm that is founded on abstract interpretation and derives safe approximations of the execution times of parallel programs. The algorithm is formulated and proven correct for a simple parallel language with parallel threads, shared memory and synchronization via locks.
  •  
45.
  • Gustavsson, Andreas, et al. (author)
  • Toward Static Timing Analysis of Parallel Software
  • 2012
  • In: Proc. 12th International Workshop on Worst-Case Execution-Time Analysis (WCET'12). - 9783939897415 ; , s. 38-47
  • Conference paper (peer-reviewed)abstract
• The current trend within computer systems, including real-time systems, is to incorporate parallel hardware, e.g., multicore processors, and parallel software. Thus, the ability to safely analyse such parallel systems, e.g., regarding the timing behaviour, becomes necessary. Static timing analysis is an approach to mathematically derive safe bounds on the execution time of a program, when executed on a given hardware platform. This paper presents an algorithm that statically analyses the timing of parallel software, with threads communicating through shared memory, using abstract interpretation. It also gives an extensive example to clarify how the algorithm works.
  •  
46.
  • Gustavsson, Andreas, et al. (author)
  • Towards WCET Analysis of Multicore Architectures using UPPAAL
  • 2010
  • In: OpenAccess Series in Informatics, vol. 15, 2010. - 9783939897217 ; , s. 101-112
  • Conference paper (peer-reviewed)abstract
    • To take full advantage of the increasingly used shared-memory multicore architectures, software algorithms will need to be parallelized over multiple threads. This means that threads will have to share resources (e.g. some level of cache) and communicate and synchronize with each other. There already exist software libraries (e.g. OpenMP) used to explicitly parallelize available sequential C/C++ and Fortran code, which means that parallel code could be easily obtained. To be able to use parallel software running on multicore architectures in embedded systems with hard real-time constraints, new WCET (Worst-Case Execution Time) analysis methods and tools must be developed. This paper investigates a method based on model-checking a system of timed automata using the UPPAAL tool box. It is found that it is possible to perform WCET analysis on (not too large and complex) parallel systems using UPPAAL. We also show how to model thread synchronization using spinlock-like primitives.
  •  
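The spinlock-like primitive that the entry above models as timed automata can be sketched in ordinary code. The following Python version is only an illustration of the synchronization pattern being modelled, not an UPPAAL model: each thread busy-waits on a non-blocking test-and-set until it wins the lock.

    import threading

    class SpinLock:
        def __init__(self):
            self._ts = threading.Lock()      # stands in for atomic test-and-set

        def acquire(self):
            while not self._ts.acquire(blocking=False):
                pass                         # busy-wait ("spin")

        def release(self):
            self._ts.release()

    lock, shared = SpinLock(), [0]

    def worker():
        for _ in range(10_000):
            lock.acquire()                   # critical section begins
            shared[0] += 1
            lock.release()

    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(shared[0])                         # 40000

In the paper's setting, roughly speaking, each such spinning thread would become a timed automaton with locations for spinning, holding the lock, and releasing it, so that UPPAAL can explore the interleavings and bound the worst-case waiting time.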
47.
  •  
48.
  • Helali Moghadam, Mahshid, et al. (author)
  • Adaptive Runtime Response Time Control in PLC-based Real-Time Systems using Reinforcement Learning
  • 2018
• In: ACM/IEEE 13th International Symposium on Software Engineering for Adaptive and Self-Managing Systems, SEAMS 2018, co-located with International Conference on Software Engineering, ICSE 2018, Gothenburg, Sweden, 28-29 May 2018. - New York, NY, USA : ACM. ; , s. 217-223
  • Conference paper (peer-reviewed)abstract
• Timing requirements such as constraints on response time are key characteristics of real-time systems, and violations of these requirements might cause a total failure, particularly in hard real-time systems. Runtime monitoring of the system properties is of great importance to detect and mitigate such failures. Thus, a runtime control that preserves the system properties could improve the robustness of the system with respect to timing violations. Common control approaches may require a precise analytical model of the system, which is difficult to provide at design time. Reinforcement learning is a promising technique to provide adaptive model-free control when the environment is stochastic and the control problem can be formulated as a Markov Decision Process. In this paper, we propose an adaptive runtime control using reinforcement learning for real-time programs based on Programmable Logic Controllers (PLCs), to meet the response time requirements. We demonstrate through multiple experiments that our approach can control the response time efficiently to satisfy the timing requirements.
  •  
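The control loop described in the entry above can be sketched with tabular Q-learning. Everything in the following Python fragment is invented for illustration (the simulated plant, the state discretization, and the single tunable parameter); the paper's actual controller and PLC setup are more elaborate.

    import random

    TARGET_MS = 50.0
    ACTIONS = (-1, 0, 1)             # decrease / keep / increase the parameter
    ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1
    Q = {}                           # Q[(state, action)] -> estimated value

    def response_time(param):        # toy stand-in for the measured system
        return 80.0 - 8.0 * param + random.uniform(-3.0, 3.0)

    def state_of(rt):                # discretize the distance to the target
        return min(int(abs(rt - TARGET_MS) // 10), 5)

    param, rt = 0, response_time(0)
    for _ in range(2000):
        s = state_of(rt)
        if random.random() < EPS:                    # explore
            a = random.choice(ACTIONS)
        else:                                        # exploit
            a = max(ACTIONS, key=lambda x: Q.get((s, x), 0.0))
        param = max(0, min(10, param + a))           # apply the tuning action
        rt = response_time(param)
        reward = -abs(rt - TARGET_MS)                # best when on target
        s2 = state_of(rt)
        best_next = max(Q.get((s2, b), 0.0) for b in ACTIONS)
        Q[(s, a)] = (1 - ALPHA) * Q.get((s, a), 0.0) \
                    + ALPHA * (reward + GAMMA * best_next)

    print(f"tuned parameter: {param}, response time: {rt:.1f} ms")

The agent needs no analytical model of the plant: it learns, from observed response times alone, which parameter adjustments keep the response time near the target, which is the model-free property the paper relies on.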
49.
  • Helali Moghadam, Mahshid, et al. (author)
  • An autonomous performance testing framework using self-adaptive fuzzy reinforcement learning
  • 2022
  • In: Software quality journal. - : Springer. - 0963-9314 .- 1573-1367. ; , s. 127-159
  • Journal article (peer-reviewed)abstract
• Test automation brings the potential to reduce costs and human effort, but several aspects of software testing remain challenging to automate. One such example is automated performance testing to find performance breaking points. Current approaches to automated generation of performance test cases mainly involve analysis of source code or system models, or use-case-based techniques. However, source code and system models might not always be available at testing time. On the other hand, if the testing system could instead learn the optimal performance testing policy for the intended objective, then test automation without advanced performance models would be possible. Furthermore, the learned policy could later be reused for similar software systems under test, leading to higher test efficiency. We propose SaFReL, a self-adaptive fuzzy reinforcement-learning-based performance testing framework. SaFReL learns the optimal policy to generate performance test cases through an initial learning phase, then reuses it during a transfer learning phase, while keeping the learning running and updating the policy in the long term. Through multiple experiments in a simulated performance testing setup, we demonstrate that our approach generates the target performance test cases for different programs more efficiently than a typical testing process and performs adaptively without access to source code and performance models.
  •  
50.
  • Helali Moghadam, Mahshid (author)
  • Intelligence-Driven Software Performance Assurance
  • 2022
  • Doctoral thesis (other academic/artistic)abstract
• Software performance assurance is of great importance for the success of software products, which are nowadays involved in many parts of our life. Performance evaluation approaches such as performance modeling and testing, as well as runtime performance control methods, can all contribute to the realization of software performance assurance. Many of the common approaches to tackle challenges in this area rely on performance models or use system models and source code. Although modeling provides a deep insight into the system behavior, developing a detailed model is challenging. Furthermore, software artifacts such as models and source code might not be readily available at all times in the development lifecycle. This thesis focuses on leveraging the potential of machine learning (ML) and evolutionary search-based techniques to provide viable solutions for addressing the challenges in different aspects of software performance assurance efficiently and effectively. In this thesis, we first investigate the capabilities of model-free reinforcement learning to address the objectives in robustness testing problems. We develop two self-adaptive reinforcement learning-driven test agents called SaFReL and RELOAD. They generate effective platform-based test scenarios and test workloads, respectively. The output scenarios and workloads help testers and software engineers meet their objectives efficiently without relying on models or source code. SaFReL and RELOAD learn the optimal policies (ways) to meet the test objectives and can reuse the learned policies adaptively in other testing settings. Policy reuse can lead to higher test efficiency and cost savings, for example, when testing similar test objectives or software systems with comparable performance sensitivity. Next, we leverage the potential of evolutionary computation algorithms, i.e., genetic algorithms, evolution strategies, and particle swarm optimization, to generate failure-revealing test scenarios for robustness testing of AI systems. In this part, we choose autonomous driving systems as a prevailing example of contemporary AI systems. We study the efficacy of the proposed evolutionary search-based test generation techniques and evaluate primarily to what extent they can trigger failures. Moreover, we investigate the diversity of those failures and compare them to existing baseline solutions. Finally, we again use the potential of model-free reinforcement learning to develop adaptive ML-driven runtime performance control approaches. We present a response time preservation method for a sample type of industrial application and a resource allocation technique for dynamic workloads in a data grid application. The proposed ML-driven techniques learn how to adjust the tunable parameters and resource configuration at runtime to keep the performance continually compliant with the requirements and to further optimize the runtime performance. We evaluate the efficacy of the approaches and show how effectively they can improve the performance and keep the performance requirements satisfied under varying conditions such as dynamic workloads and the occurrence of runtime events that lead to substantial response time deviations.
  •  
Type of publication
conference paper (65)
journal article (16)
doctoral thesis (16)
reports (11)
licentiate thesis (7)
book chapter (5)
other publication (3)
Type of content
peer-reviewed (86)
other academic/artistic (37)
Author/Editor
Lisper, Björn (107)
Ermedahl, Andreas (24)
Gustafsson, Jan (22)
Lisper, Björn, Profe ... (14)
Helali Moghadam, Mah ... (12)
Borg, Markus (12)
Bohlin, Markus, 1976 ... (8)
Saadatmand, Mehrdad, ... (7)
Nolte, Thomas (7)
Daneshtalab, Masoud (7)
Bygde, Stefan (7)
Saadatmand, Mehrdad (6)
Masud, Abu Naser (6)
Bohlin, Markus (6)
Eldh, Sigrid (5)
Carlson, Jan (5)
Sandberg, Christer (5)
Sjödin, Mikael, 1971 ... (5)
Holsti, Niklas (5)
Khanfar, Husni (5)
Faragardi, Hamid Rez ... (4)
Källberg, Linus (4)
Jägemar, Marcus, 197 ... (4)
Ciccozzi, Federico, ... (3)
Altenbernd, Peter (3)
Sandström, Kristian (3)
Dobrin, Radu, 1970- (3)
Gustavsson, Andreas, ... (3)
Santos, Marcelo (3)
Eldh, S. (2)
Mubeen, Saad (2)
Addazi, Lorenzo (2)
Gustavsson, Andreas (2)
Pettersson, Paul (2)
Raik, J. (2)
Altmeyer, S. (2)
Rochange, C. (2)
Sjodin, M. (2)
Knoop, Jens (2)
Nordlander, Johan (2)
Bygde, Stefan, 1980- (2)
Gustafsson, Jan, Doc ... (2)
Potena, Pasqualina (2)
Jägemar, Marcus (2)
Kirner, Raimund (2)
Gustafsson, Jan, Sen ... (2)
Ermedahl, Andreas, S ... (2)
Hamidi, Golrokh (2)
Lindhult, Johan (2)
Schreiner, Dietmar (2)
University
Mälardalen University (115)
RISE (15)
Royal Institute of Technology (4)
Luleå University of Technology (2)
Uppsala University (1)
Linköping University (1)
Mid Sweden University (1)
University of Skövde (1)
Chalmers University of Technology (1)
Blekinge Institute of Technology (1)
Language
English (123)
Research subject (UKÄ/SCB)
Engineering and Technology (57)
Natural sciences (50)
Humanities (1)
