SwePub
Search the SwePub database

  Advanced search

Hit list for the search "WFRF:(de Groot Bert L.) srt2:(2007-2009)"

Search: WFRF:(de Groot Bert L.) > (2007-2009)

  • Results 1-3 of 3
1.
  • Fischer, Gerhard, 1978, et al. (author)
  • Crystal structure of a yeast aquaporin at 1.15 angstrom reveals a novel gating mechanism.
  • 2009
  • In: PLoS Biology. - Public Library of Science (PLoS). - ISSN 1545-7885, 1544-9173. ; 7:6
  • Journal article (peer-reviewed). Abstract:
    • Aquaporins are transmembrane proteins that facilitate the flow of water through cellular membranes. An unusual characteristic of yeast aquaporins is that they frequently contain an extended N terminus of unknown function. Here we present the X-ray structure of the yeast aquaporin Aqy1 from Pichia pastoris at 1.15 Å resolution. Our crystal structure reveals that the water channel is closed by the N terminus, which arranges as a tightly wound helical bundle, with Tyr31 forming H-bond interactions to a water molecule within the pore and thereby occluding the channel entrance. Nevertheless, functional assays show that Aqy1 has appreciable water transport activity that aids survival during rapid freezing of P. pastoris. These findings establish that Aqy1 is a gated water channel. Mutational studies in combination with molecular dynamics simulations imply that gating may be regulated by a combination of phosphorylation and mechanosensitivity.
2.
  • Kutzner, Carsten, et al. (author)
  • Software news and update : Speeding up parallel GROMACS on high-latency networks
  • 2007
  • In: Journal of Computational Chemistry. - Wiley. - ISSN 0192-8651, 1096-987X. ; 28:12, pp. 2075-2084
  • Journal article (peer-reviewed). Abstract:
    • We investigate the parallel scaling of the GROMACS molecular dynamics code on Ethernet Beowulf clusters and what prerequisites are necessary for decent scaling even on such clusters with only limited bandwidth and high latency. GROMACS 3.3 scales well on supercomputers like the IBM p690 (Regatta) and on Linux clusters with a special interconnect like Myrinet or Infiniband. Because of the high single-node performance of GROMACS, however, on the widely used Ethernet switched clusters, the scaling typically breaks down when more than two computer nodes are involved, limiting the absolute speedup that can be gained to about 3 relative to a single-CPU run. With the LAM MPI implementation, the main scaling bottleneck is here identified to be the all-to-all communication which is required every time step. During such an all-to-all communication step, a huge amount of messages floods the network, and as a result many TCP packets are lost. We show that Ethernet flow control prevents network congestion and leads to substantial scaling improvements. For 16 CPUs, e.g., a speedup of 11 has been achieved. However, for more nodes this mechanism also fails. Having optimized an all-to-all routine, which sends the data in an ordered fashion, we show that it is possible to completely prevent packet loss for any number of multi-CPU nodes. Thus, the GROMACS scaling dramatically improves, even for switches that lack flow control. In addition, for the common HP ProCurve 2848 switch we find that for optimum all-to-all performance it is essential how the nodes are connected to the switch's ports. This is also demonstrated for the example of the Car-Parrinello MD code.
3.
  • Kutzner, Carsten, et al. (author)
  • Speeding up parallel GROMACS on high-latency networks
  • 2007
  • In: Journal of Computational Chemistry. - Wiley. - ISSN 0192-8651, 1096-987X. ; 28:12, pp. 2075-2084
  • Journal article (peer-reviewed). Abstract:
    • We investigate the parallel scaling of the GROMACS molecular dynamics code on Ethernet Beowulf clusters and what prerequisites are necessary for decent scaling even on such clusters with only limited bandwidth and high latency. GROMACS 3.3 scales well on supercomputers like the IBM p690 (Regatta) and on Linux clusters with a special interconnect like Myrinet or Infiniband. Because of the high single-node performance of GROMACS, however, on the widely used Ethernet switched clusters, the scaling typically breaks down when more than two computer nodes are involved, limiting the absolute speedup that can be gained to about 3 relative to a single-CPU run. With the LAM MPI implementation, the main scaling bottleneck is here identified to be the all-to-all communication which is required every time step. During such an all-to-all communication step, a huge amount of messages floods the network, and as a result many TCP packets are lost. We show that Ethernet flow control prevents network congestion and leads to substantial scaling improvements. For 16 CPUs, e.g., a speedup of 11 has been achieved. However, for more nodes this mechanism also fails. Having optimized an all-to-all routine, which sends the data in an ordered fashion, we show that it is possible to completely prevent packet loss for any number of multi-CPU nodes. Thus, the GROMACS scaling dramatically improves, even for switches that lack flow control. In addition, for the common HP ProCurve 2848 switch we find that for optimum all-to-all performance it is essential how the nodes are connected to the switch's ports. This is also demonstrated for the example of the Car-Parrinello MD code.
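
The abstracts of entries 2 and 3 describe an optimized all-to-all routine that "sends the data in an ordered fashion" so the Ethernet switch is never flooded and no TCP packets are dropped. The paper's actual routine is not reproduced in these records; the following is a minimal, hypothetical C/MPI sketch of one common ordered pattern, a pairwise (shifted) exchange built on MPI_Sendrecv, in which each rank talks to exactly one partner per step instead of issuing all messages at once. The function name ordered_alltoall, the MPI_INT payload, and the buffer sizes are illustrative assumptions, not details taken from the paper.

/* Hypothetical sketch of an ordered (pairwise) all-to-all exchange.
 * In step s, rank r sends its block for rank (r+s)%p and receives the
 * block addressed to it from rank (r-s+p)%p, so each rank has exactly
 * one partner at a time instead of p-1 simultaneous messages. */
#include <mpi.h>
#include <stdlib.h>

static void ordered_alltoall(const int *sendbuf, int *recvbuf, int count,
                             MPI_Comm comm)
{
    int rank, nprocs;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &nprocs);

    for (int s = 0; s < nprocs; s++) {
        int dst = (rank + s) % nprocs;          /* partner we send to      */
        int src = (rank - s + nprocs) % nprocs; /* partner we receive from */
        MPI_Sendrecv(sendbuf + dst * count, count, MPI_INT, dst, 0,
                     recvbuf + src * count, count, MPI_INT, src, 0,
                     comm, MPI_STATUS_IGNORE);
    }
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    const int count = 4;  /* ints destined for each rank (arbitrary) */
    int *sendbuf = malloc((size_t)nprocs * count * sizeof(int));
    int *recvbuf = malloc((size_t)nprocs * count * sizeof(int));
    for (int i = 0; i < nprocs * count; i++)
        sendbuf[i] = rank * 1000 + i;  /* dummy payload */

    ordered_alltoall(sendbuf, recvbuf, count, MPI_COMM_WORLD);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}

Serializing the exchange into nprocs steps costs some latency but keeps the traffic pattern predictable and congestion-free, which is the same trade-off the abstract describes for Ethernet switches without working flow control.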