Search: id:"swepub:oai:DiVA.org:kth-213532"
Performance study of multithreaded MPI and OpenMP tasking in a large scientific code
- Akhmetova, Dana (author)
- KTH, Computational Science and Technology (CST)
-
- Iakymchuk, Roman (author)
- KTH, Computational Science and Technology (CST)
-
- Ekeberg, Örjan (author)
- KTH, Computational Science and Technology (CST)
-
- Laure, Erwin (author)
- KTH, Computational Science and Technology (CST)
- Institute of Electrical and Electronics Engineers (IEEE), 2017
- English.
-
In: Proceedings - 2017 IEEE 31st International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2017. - Institute of Electrical and Electronics Engineers (IEEE). - 9781538634080, pp. 756-765
- Related links:
-
https://kth.diva-por... (primary)
-
https://urn.kb.se/re...
-
https://doi.org/10.1...
Abstract
- With the large variety and complexity of existing HPC machines and the uncertainty about exact future Exascale hardware, it is not clear whether existing parallel scientific codes will perform well on future Exascale systems: they may need to be heavily modified or even rewritten from scratch. It is therefore important to ensure now that software is ready for Exascale computing and will use all Exascale resources well. Many parallel programming models try to account for all possible hardware features and nuances. However, the HPC community does not yet have a definite answer as to whether Exascale computing calls for a natural evolution of existing, mutually interoperable models or for a disruptive approach. Here, we focus on the first option, particularly on a practical assessment of how some parallel programming models can coexist with each other. This work describes two API combination scenarios using the example of iPIC3D [26], an implicit Particle-in-Cell code for space weather applications written in C++ with MPI plus OpenMP. The first scenario enables multiple OpenMP threads to call MPI functions simultaneously, with no restrictions, using the MPI_THREAD_MULTIPLE thread safety level. The second scenario adds the OpenMP tasking model on top of the first. The paper reports a step-by-step methodology and experience with these API combinations in iPIC3D; provides scaling tests for these implementations with up to 2048 physical cores; discusses the interoperability issues encountered; and offers suggestions to programmers and scientists who may adopt these API combinations in their own codes.
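The two scenarios from the abstract can be sketched in a few lines of C++. This is a hypothetical illustration, not code from iPIC3D: the buffer size, tags, and neighbor exchange are invented for the example. Scenario 1 is the request for the MPI_THREAD_MULTIPLE thread-support level at initialization; Scenario 2 layers OpenMP tasks on top, with MPI calls issued from inside task bodies by whichever threads execute them.

```cpp
// Sketch of MPI_THREAD_MULTIPLE plus OpenMP tasking (assumed, not from the paper).
#include <mpi.h>
#include <omp.h>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
    // Scenario 1: request full thread safety instead of plain MPI_Init;
    // the library reports the level it actually provides.
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE) {
        std::fprintf(stderr, "MPI_THREAD_MULTIPLE not supported\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Scenario 2: OpenMP tasks whose bodies call MPI concurrently.
    #pragma omp parallel
    #pragma omp single
    {
        for (int t = 0; t < 4; ++t) {
            #pragma omp task firstprivate(t)
            {
                // Each task exchanges an illustrative halo-like buffer with
                // the next rank; a distinct tag per task keeps the
                // concurrent messages apart.
                std::vector<double> buf(1024, static_cast<double>(rank));
                int peer = (rank + 1) % size;
                MPI_Sendrecv_replace(buf.data(), static_cast<int>(buf.size()),
                                     MPI_DOUBLE, peer, t, MPI_ANY_SOURCE, t,
                                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            }
        }
        #pragma omp taskwait  // wait for all tasks before leaving the region
    }

    MPI_Finalize();
    return 0;
}
```

Checking `provided` matters in practice: many MPI builds default to a lower thread-support level, and calling MPI from concurrent threads without MPI_THREAD_MULTIPLE is undefined behavior.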
Subject headings
- ENGINEERING AND TECHNOLOGY -- Electrical Engineering, Electronic Engineering, Information Engineering -- Computer Systems (hsv//eng)
Keywords
- API interoperability
- Exascale
- MPI_THREAD_MULTIPLE thread safety
- multithreaded MPI
- OpenMP tasks
- performance
- programming models
Publication and content type
- ref (subject category)
- kon (subject category)