SwePub

Parallelizing more loops with compiler guided refactoring

Larsen, P. (author)
Technical University of Denmark
Ladelsky, R. (author)
IBM Haifa Labs
Lidman, Jacob, 1985 (author)
Chalmers University of Technology
McKee, Sally A., 1963 (author)
Chalmers University of Technology
Karlsson, S. (author)
Technical University of Denmark
Zaks, A. (author)
ISBN 9780769547961
2012
English.
In: Proceedings of the International Conference on Parallel Processing. 41st International Conference on Parallel Processing (ICPP 2012), Pittsburgh, PA, 10-13 September 2012. ISSN 0190-3918. ISBN 9780769547961. pp. 410-419.
  • Conference paper (peer-reviewed)
Abstract
  • The performance of many parallel applications relies not on instruction-level parallelism but on loop-level parallelism. Unfortunately, automatic parallelization of loops is a fragile process; many different obstacles affect or prevent it in practice. To address this predicament we developed an interactive compilation feedback system that guides programmers in iteratively modifying their application source code, helping leverage the compiler's ability to generate loop-parallel code. We employ our system to modify two sequential benchmarks dealing with image processing and edge detection, resulting in scalable parallelized code that runs up to 8.3 times faster on an eight-core Intel Xeon 5570 system and up to 12.5 times faster on a quad-core IBM POWER6 system. Benchmark performance varies significantly between the systems, which suggests that semi-automatic parallelization should be combined with target-specific optimizations. Furthermore, comparing the first benchmark to manually parallelized, hand-optimized pthreads and OpenMP versions, we find that code generated using our approach typically outperforms the pthreads code (within 93-339%) and performs competitively against the OpenMP code (within 75-111%). The second benchmark outperforms the manually parallelized and optimized OpenMP code (within 109-242%).

Subject headings

NATURAL SCIENCES  -- Computer and Information Sciences (hsv//eng)

Keyword

Refactoring
Compiler Feedback
Automatic Loop Parallelization

Publication and Content Type

kon (conference paper)
ref (peer-reviewed)
