SwePub
Search the SwePub database


Result list for the search "WFRF:(Lundqvist Thomas 1967 )"


  • Result 1-10 of 25
1.
  • Amoson, Jonas, 1973-, et al. (author)
  • A light-weight non-hierarchical file system navigation extension
  • 2012
  • In: Proceedings of the 7th International Workshop on Plan 9. - Dublin, Ireland. ; , s. 11-13
  • Conference paper (other academic/artistic), abstract:
    • Drawbacks in organising and finding files in hierarchies have led researchers to explore non-hierarchical and search-based file systems, where file identity and belonging is predicated by tagging files to categories. We have implemented a chdir() shell extension enabling navigation to a directory using a search expression. Our extension is light-weight and avoids modifying the file system to guarantee backwards compatibility for applications relying on normal hierarchical file namespaces.
  •  
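The chdir() extension in entry 1 resolves a search expression to a directory instead of a literal path. A minimal Python sketch of the idea, assuming a hypothetical tag index and AND-only search semantics (the actual extension targets Plan 9 shells and is not reproduced here):

```python
# Illustrative sketch, not the authors' implementation: resolve a tag
# search expression to candidate directories, as a chdir()-style
# helper might, without modifying the underlying file system.

def resolve(tag_index, expression):
    """Return directories whose tag sets contain every term in the
    whitespace-separated search expression (AND semantics)."""
    terms = set(expression.split())
    return sorted(d for d, tags in tag_index.items() if terms <= tags)

# Hypothetical tag index mapping directories to category tags.
index = {
    "/home/a/papers": {"pdf", "research"},
    "/home/a/taxes": {"pdf", "finance"},
    "/home/a/code": {"research", "c"},
}

print(resolve(index, "pdf research"))  # ['/home/a/papers']
```

Because the index lives beside the normal hierarchy rather than replacing it, applications that expect ordinary paths keep working, which is the backwards-compatibility point the abstract makes.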
2.
  • Andersson, H. Robert H., et al. (author)
  • Flipping the Data Center Network : Increasing East-West Capacity Using Existing Hardware
  • 2017
  • In: 2017 IEEE 42nd Conference on Local Computer Networks (LCN), 9-12 Oct. 2017. - : IEEE. - 9781509065233 - 9781509065226 ; , s. 211-214
  • Conference paper (peer-reviewed), abstract:
    • In today's datacenters, there is an increasing demand for more network traffic capacity. The majority of the increase in traffic is internal to the datacenter, i.e., it flows between different servers within the datacenter. This category of traffic is often referred to as east-west traffic, and traditional hierarchical architectures are not well equipped to handle this type of traffic. Instead, they are better suited for the north-south traffic between hosts and the Internet. One suggested solution for this capacity problem is to adopt a folded Clos topology, also known as spine-leaf, which often relies on software-defined networking (SDN) controllers to manage traffic. This paper shows that it is possible to implement a spine-leaf network using commodity off-the-shelf switches and thus improve the east-west traffic capacity. This can be obtained using low-complexity configuration and edge routing for load balancing, eliminating the need for a centralized SDN controller.
  •  
3.
  • de Blanche, Andreas, 1975-, et al. (author)
  • A methodology for estimating co-scheduling slowdowns due to memory bus contention on multicore nodes
  • 2014
  • In: Proceedings of the IASTED International Conference on Parallel and Distributed Computing and Networks, PDCN 2014. - : ACTA Press. - 9780889869677 - 9780889869653 ; , s. 216-223
  • Conference paper (peer-reviewed), abstract:
    • When two or more programs are co-scheduled on the same multicore computer they might experience a slowdown due to the limited off-chip memory bandwidth. According to our measurements, this slowdown does not depend on the total bandwidth use in a simple way. One thing we observe is that a higher memory bandwidth usage will not always lead to a larger slowdown. This means that relying on bandwidth usage as input to a job scheduler might cause non-optimal scheduling of processes on multicore nodes in clusters, clouds, and grids. To guide scheduling decisions, we instead propose a slowdown-based characterization approach. Real slowdowns are complex to measure due to the exponential number of experiments needed. Thus, we present a novel method for estimating the slowdown programs will experience when co-scheduled on the same computer. We evaluate the method by comparing the predictions made with real slowdown data and the often-used memory bandwidth based method. This study shows that a scheduler relying on slowdown-based categorization makes fewer incorrect co-scheduling choices, and the negative impact on program execution times is less than when using a bandwidth-based categorization method.
  •  
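The slowdown-based characterization in entry 3 can be illustrated with a toy example. Using hypothetical runtimes (not data from the paper), a scheduler can rank candidate co-schedules by the sum of per-program slowdowns, where a program's slowdown is its co-run time divided by its standalone time:

```python
# Illustrative sketch, not the paper's estimation method: rank
# co-schedules of four programs onto two shared nodes by summed
# slowdown. All runtimes below are made-up numbers.

solo = {"A": 10.0, "B": 8.0, "C": 12.0, "D": 9.0}   # seconds, run alone
paired = {                                           # seconds when co-run
    ("A", "B"): (13.0, 9.5), ("A", "C"): (11.0, 13.0),
    ("A", "D"): (15.0, 14.0), ("B", "C"): (8.5, 12.5),
    ("B", "D"): (10.0, 11.0), ("C", "D"): (14.5, 10.5),
}

def slowdown(pair):
    """Sum of per-program slowdowns (co-run time / solo time)."""
    a, b = pair
    ta, tb = paired[pair]
    return ta / solo[a] + tb / solo[b]

# The three ways to split {A, B, C, D} into two co-scheduled pairs.
pairings = [(("A", "B"), ("C", "D")),
            (("A", "C"), ("B", "D")),
            (("A", "D"), ("B", "C"))]
best = min(pairings, key=lambda p: slowdown(p[0]) + slowdown(p[1]))
print(best)  # (('A', 'C'), ('B', 'D'))
```

The paper's contribution is estimating such slowdowns without measuring every pair, since exhaustive pairwise measurement grows quadratically with the number of programs.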
4.
  • de Blanche, Andreas, 1975-, et al. (author)
  • Addressing characterization methods for memory contention aware co-scheduling
  • 2015
  • In: Journal of Supercomputing. - : Springer Science and Business Media LLC. - 0920-8542 .- 1573-0484. ; 71:4, s. 1451-1483
  • Journal article (peer-reviewed), abstract:
    • The ability to precisely predict how memory contention degrades performance when co-scheduling programs is critical for reaching high performance levels in cluster, grid and cloud environments. In this paper we present an overview and compare the performance of state-of-the-art characterization methods for memory-aware (co-)scheduling. We evaluate the prediction accuracy and co-scheduling performance of four methods: one slowdown-based, two cache-contention based and one based on memory bandwidth usage. Both our regression analysis and scheduling simulations find that the slowdown-based method, represented by Memgen, performs better than the other methods. The linear correlation coefficient of Memgen's prediction is 0.890. Memgen's preferred schedules reached 99.53% of the obtainable performance on average. Also, the memory bandwidth usage method performed almost as well as the slowdown-based method. Furthermore, while most prior work promotes characterization based on cache miss rate, we found it to be on par with random scheduling of programs and highly unreliable.
  •  
5.
  • de Blanche, Andreas, 1975-, et al. (author)
  • Node Sharing for Increased Throughput and Shorter Runtimes : an Industrial Co-Scheduling Case Study
  • 2018
  • In: Proceedings of the 3rd Workshop on Co-Scheduling of HPC Applications (COSH 2018). ; , s. 15-20
  • Conference paper (peer-reviewed), abstract:
    • The allocation of jobs to nodes and cores in industrial clusters is often based on queue-system standard settings, guesses, or perceived fairness between different users and projects. Unfortunately, hard empirical data is often lacking and jobs are scheduled and co-scheduled for no apparent reason. In this case study, we evaluate the performance impact of co-scheduling jobs using three types of applications and an existing 450+ node cluster at a company doing large-scale parallel industrial simulations. We measure the speedup when co-scheduling two applications together, sharing two nodes, compared to running the applications on separate nodes. Our results and analyses show that by enabling co-scheduling we improve performance on the order of 20% both in throughput and in execution times, and improve the execution times even more if the cluster is running with low utilization. We also find that a simple reconfiguration of the number of threads used in one of the applications can lead to a performance increase of 35-48%, showing that there is a potentially large performance increase to be gained by changing current practice in industry.
  •  
6.
  • Loconsole, Annabella, et al. (author)
  • Comparing the CDIO educational framework with University West’s WIL certification: do they complement each other?
  • 2021
  • In: VILÄR. - 9789189325036 ; , s. 15-16
  • Conference paper (other academic/artistic), abstract:
    • Higher education institutions (HEIs) need to continuously improve their quality to prepare students for the society of the 21st century. They need to develop efficient ways of collaborating with various partners in the surrounding community. Close ties with business and industry, and diversity among staff and students, are necessary, especially within engineering education. An engineering degree should prepare students to develop a wide range of knowledge and skills. These range from technical, scientific, and mathematical knowledge to soft skills such as teamwork, business skills and critical analysis, which are also central sustainability competences. It is vital that learning for engineers takes place in the context of authentic engineering problems and processes to develop these skills and to put theory into practice. Several initiatives focused on incorporating these skills in higher education exist. CDIO (Conceive, Design, Implement, Operate) is one of the most prominent initiatives within engineering education. CDIO targets the typical tasks an engineer performs when bringing new systems, products and services to the market or to society. The CDIO initiative was created to strengthen active and problem-based learning and to improve students' communication and professional skills. CDIO focuses on improving practical and work-related skills to better prepare engineering students for their future professional life. University West employs another initiative, Arbetsintegrerat lärande (AIL), which roughly translates to Work Integrated Learning (WIL). WIL shares much of the same philosophy as CDIO. All programs at University West are currently undergoing an AIL certification process. For engineering programs, which have been working with CDIO, it is interesting to compare the two. It is currently unclear how they differ.
In this study we compare the CDIO educational framework with the WIL certification through a series of workshops to identify in which areas they overlap and in which they differ. Would a program that has adopted the CDIO educational framework automatically fulfill the WIL certification?
  •  
7.
  • Lundmark, Elias, et al. (author)
  • Increasing Throughput of Multiprogram HPC Workloads : Evaluating a SMT Co-Scheduling Approach
  • 2017
  • Conference paper (peer-reviewed), abstract:
    • Simultaneous Multithreading (SMT) is a technique that allows for more efficient processor utilization by scheduling multiple threads on a single physical core. Previous research has shown an average throughput increase of around 20% with an SMT level of two, i.e., two threads per core. However, a bad combination of threads can actually result in decreased performance. To be conservative, many HPC systems have SMT disabled, thus limiting the number of scheduling slots in the system to one per core. However, for SMT to not hurt performance, we need to determine which threads should share a core. In this poster, we use 30 random SPEC CPU job mixes on a twelve-core Broadwell-based node to study the impact of enabling SMT using two different co-scheduling strategies. The results show that SMT can increase performance, especially when using no-same-program co-scheduling.
  •  
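The no-same-program strategy named in entry 7 can be sketched as a greedy pairing of threads onto cores. This is an illustrative reconstruction under assumed semantics (avoid putting two instances of the same program on one core, fall back to same-program pairing only when unavoidable), not the authors' code:

```python
# Illustrative sketch, not the poster's implementation: assign two
# threads per SMT core while avoiding two copies of one program
# sharing a core.

def pair_no_same_program(jobs):
    """Greedily pair job names so no core runs two copies of one
    program when a different partner is available. Returns a list of
    (thread, thread) pairs; an odd leftover thread is paired with None."""
    pending = list(jobs)
    pairs = []
    while pending:
        first = pending.pop(0)
        # Prefer a partner running a different program; fall back to
        # the first remaining thread (same program) if none exists.
        partner_idx = next((i for i, j in enumerate(pending) if j != first),
                           0 if pending else None)
        if partner_idx is None:
            pairs.append((first, None))  # core runs a single thread
        else:
            pairs.append((first, pending.pop(partner_idx)))
    return pairs

print(pair_no_same_program(["mcf", "mcf", "lbm", "gcc"]))
# [('mcf', 'lbm'), ('mcf', 'gcc')]
```

The intuition is that threads from different programs tend to stress different core resources, so mixed pairs are less likely to hit the pathological combinations the poster warns about.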
8.
  • Lundqvist, Lars-Olov, 1958-, et al. (author)
  • Risk markers for not returning to work among people with acquired brain injury
  • 2019
  • Conference paper (peer-reviewed), abstract:
    • BACKGROUND: Research shows that a variety of factors are related to the risk of not returning to work among people with acquired brain injury (ABI). In Sweden, 40% of those with ABI in working age return to work within two years after the injury, which is in line with international findings. However, since countries may differ in work rehabilitation, social security systems, culture and laws, different factors may influence the possibilities of returning to work across countries. AIMS: The aim of this study was to investigate person-, injury-, activity- and rehabilitation-related risk markers for not returning to work among persons with ABI in Sweden. METHODS: Retrospective data of an ABI cohort of 2008 people from the WebRehab Sweden quality register were used. RESULTS: Analyses showed that the risk ratio for not returning to work was larger for people that, among the personal factors, were women, were born outside of Sweden, had a low education level, and did not have children in the household; among the injury-related factors, had a long (> 2 months) hospital stay, aphasia, low motor function, low cognitive function, high pain/discomfort, and high anxiety/depression; among the activity-related factors, had low function in self-care, inability to perform usual activities, and had their driver's license suspended; and finally, among the rehabilitation-related factors, were satisfied with treatment and had influence over their rehabilitation plan. DISCUSSION / CONCLUSION: Several factors in different areas were risk markers for not returning to work among people with ABI. This suggests that work rehabilitation and interventions, in addition to direct injury-related issues, need to address personal, activity-related and rehabilitation-related factors in order to increase the patient's possibility to return to work. Influences of general and country-specific factors on returning to work among people with ABI will be discussed.
  •  
9.
  •  
10.
  • Lundqvist, Thomas, 1967 (author)
  • A WCET Analysis Method for Pipelined Microprocessors with Cache Memories
  • 2002
  • Doctoral thesis (other academic/artistic), abstract:
    • When constructing real-time systems, safe and tight estimations of the worst case execution time (WCET) of programs are needed. To obtain tight estimations, a common approach is to do path and timing analyses. Path analysis is responsible for eliminating infeasible paths in the program and timing analysis is responsible for accurately modeling the timing behavior of programs. The focus of this thesis is on analysis of programs running on high-performance microprocessors employing pipelining and caching. This thesis presents a new method, referred to as cycle-level symbolic execution, that tightly integrates path and timing analysis. An implementation of the method has been used to estimate the WCET for a suite of programs running on a high-performance processor. The results show that by using an integrated analysis, the overestimation is significantly reduced compared to other methods. The method automatically eliminates infeasible paths and derives path information such as loop bounds, and performs accurate timing analysis for a multiple-issue processor with an instruction and data cache. The thesis also identifies timing anomalies in dynamically scheduled processors. These anomalies can lead to unbounded timing effects when estimating the WCET, which makes it unsafe to use previously presented timing analysis methods. To handle these unbounded timing effects, two methods are proposed. The first method is based on program modifications and the second method relies on using pessimistic timing models. Both methods make it possible to safely use all previously published timing analysis methods even for architectures where timing anomalies can occur. Finally, the use of data caching is examined. For data caching to be fruitful in real-time systems, data accesses must be predictable when estimating the WCET. 
Based on a notion of predictable and unpredictable data structures, it is shown how to classify program data structures according to their influence on data cache analysis. For both categories, several examples of frequently used types of data structures are provided. Furthermore, it is shown how to make an efficient data cache analysis even when data structures have an unknown placement in memory. This is important, for example, when analyzing single subroutines of a program.
  •  
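As a point of contrast to the integrated path and timing analysis in the thesis, a purely structural WCET bound simply sums each basic block's worst-case time weighted by its loop bound, which is exactly what makes it pessimistic on infeasible paths. A toy sketch with made-up cycle counts (not from the thesis):

```python
# Illustrative sketch, not the thesis method: a naive structural WCET
# bound. Each basic block contributes (worst-case cycles) * (maximum
# execution count). Cycle counts and bounds below are hypothetical.

blocks = {
    "init":      (5, 1),    # 5 cycles, executes once
    "loop_body": (12, 100), # 12 cycles, loop bound of 100 iterations
    "exit":      (3, 1),    # 3 cycles, executes once
}

wcet_bound = sum(cycles * bound for cycles, bound in blocks.values())
print(wcet_bound)  # 1208
```

Cycle-level symbolic execution, as presented in the thesis, tightens such bounds by discovering that some of these worst-case combinations can never occur together on a real path through the program.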