SwePub

Speeding up parallel GROMACS on high-latency networks

Kutzner, Carsten (author)
van der Spoel, David (author)
Uppsala University, Department of Cell and Molecular Biology, Van der Spoel
Fechner, Martin (author)
Lindahl, Erik (author)
Schmitt, Udo W. (author)
de Groot, Bert L. (author)
Grubmüller, Helmut (author)
2007
English.
In: Journal of Computational Chemistry. Wiley. ISSN 0192-8651, e-ISSN 1096-987X. 28:12, pp. 2075-2084
  • Journal article (peer-reviewed)
Abstract
We investigate the parallel scaling of the GROMACS molecular dynamics code on Ethernet Beowulf clusters and what prerequisites are necessary for decent scaling even on such clusters with only limited bandwidth and high latency. GROMACS 3.3 scales well on supercomputers like the IBM p690 (Regatta) and on Linux clusters with a special interconnect like Myrinet or Infiniband. Because of the high single-node performance of GROMACS, however, on the widely used Ethernet switched clusters, the scaling typically breaks down when more than two computer nodes are involved, limiting the absolute speedup that can be gained to about 3 relative to a single-CPU run. With the LAM MPI implementation, the main scaling bottleneck is identified here as the all-to-all communication that is required every time step. During such an all-to-all communication step, a huge number of messages floods the network, and as a result many TCP packets are lost. We show that Ethernet flow control prevents network congestion and leads to substantial scaling improvements. For 16 CPUs, e.g., a speedup of 11 has been achieved. However, for more nodes this mechanism also fails. Having optimized an all-to-all routine, which sends the data in an ordered fashion, we show that it is possible to completely prevent packet loss for any number of multi-CPU nodes. Thus, the GROMACS scaling improves dramatically, even for switches that lack flow control. In addition, for the common HP ProCurve 2848 switch we find that how the nodes are connected to the switch's ports is essential for optimum all-to-all performance. This is also demonstrated for the example of the Car-Parrinello MD code.
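The ordered all-to-all the abstract describes can be illustrated with a pairwise-exchange schedule in which every node has exactly one communication partner per round, so no node's link is ever flooded by many simultaneous messages. The sketch below (a simplified illustration under the assumption of a power-of-two node count, not the actual GROMACS/LAM routine) builds such a schedule using the classic XOR pairing:

```python
def pairwise_schedule(num_ranks):
    """Build an ordered all-to-all schedule for num_ranks processes.

    In round r (1 <= r < num_ranks), rank i exchanges data with rank
    i XOR r, so each round is a perfect matching: every node sends to
    and receives from exactly one partner. This avoids the message
    flood of a naive all-to-all, which is the idea behind the ordered
    routine. Assumes num_ranks is a power of two (an assumption of
    this sketch, not a claim about the paper's implementation).
    """
    assert num_ranks > 0 and num_ranks & (num_ranks - 1) == 0, \
        "power-of-two rank count assumed"
    schedule = []
    for r in range(1, num_ranks):
        # One round: list of (sender, partner) pairs.
        schedule.append([(i, i ^ r) for i in range(num_ranks)])
    return schedule

# Example: with 4 ranks, 3 rounds cover every ordered pair exactly once.
for rnd in pairwise_schedule(4):
    print(rnd)
```

Because each round pairs the nodes one-to-one, the total data volume is the same as in an unordered all-to-all, but the instantaneous load per switch port stays bounded, which is why packet loss can be avoided even without Ethernet flow control.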

Subject terms

NATURAL SCIENCES -- Chemical Sciences (hsv//eng)
NATURAL SCIENCES -- Biological Sciences (hsv//eng)
NATURAL SCIENCES -- Computer and Information Sciences (hsv//eng)

Keywords

GROMACS parallel molecular dynamics
Car-Parrinello MD
Ethernet flow control
MPI_Alltoall
network congestion
Chemistry
Biology
Information technology

Publication and content type

ref (subject category)
art (subject category)

