
scalability -- mpi parallelization #62

Open
mhoeppner opened this issue May 7, 2012 · 1 comment

@mhoeppner

I have a few questions regarding the parallelism implemented in TRIQS:

(1) I linked TRIQS successfully against the Intel MPI and MKL libraries (all tests pass). But when I try to run any of the examples in parallel (mpirun -np 2 ...), even on a single machine, the task is started in parallel (e.g. twice) but the processes do not seem to share any information, so the same task simply runs n times. Do you have any suggestions as to where I should look for the error?

(2) I had a short look at the source code and wondered which parts of TRIQS are parallelized. As far as I can tell, the implemented solver routines are parallelized (e.g. the hybridization-expansion CTQMC). Am I right?

Thanks for your support,
Marc

@mferrero (Member) commented May 7, 2012

Hi Marc. Only a small subset of TRIQS will run in parallel without explicit input from the user. Basically:

  1. The CTQMC solver. It is a Monte Carlo algorithm and will distribute its work over the nodes if it is run in parallel.
  2. The sums over k-points. The k-sums in the Wien2TRIQS modules and in Base/SumK/SumK_Discrete.py will be split over the nodes. The same is true for the Hilbert transform in Base/DOS/Hilbert_Transform.py. (A short sketch of the general behaviour under mpirun follows this list.)
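
To make the general behaviour concrete (a minimal sketch, not from this thread, using only the MPI calls shown further down): when a script is launched with mpirun -np 2 ..., every rank executes the same file from top to bottom. Anything that does not go through MPI is simply repeated on each rank, which is exactly the duplicated behaviour described in question (1); only MPI-aware parts (the CTQMC solver, the k-sums, or explicit calls like bcast) coordinate between the ranks. The parameter dictionary below is a hypothetical placeholder:

from pytriqs.Base.Utility import MPI

# Every rank runs this same script. Without any MPI calls, each rank
# would just redo identical work independently.
params = {'Beta': 100, 'N_Cycles': 100000}   # hypothetical defaults, present on every rank
if MPI.IS_MASTER_NODE():
  params['N_Cycles'] = 1000000               # e.g. the master adjusts something after reading input

# bcast works on essentially any Python object; after this line every
# rank holds the master's version of the dictionary.
params = MPI.bcast(params)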

Except for the two cases above, the parallelism has to be taken care of by the user. This is made easier with the pytriqs.Base.Utility.MPI module (which is not described in the documentation yet, sorry). I think you should take a look at that module. You will see that it has the usual MPI commands like bcast or send, which you can apply to essentially any Python object. For example, the following script reads a Green's function on the master node and broadcasts it to the other nodes:

from pytriqs.Base.GF_Local import *
from pytriqs.Base.Archive import *
from pytriqs.Base.Utility import MPI

# Every rank creates a Green's function with the same structure
G = GFBloc_ImFreq(Indices=[1], Beta=100)

# Only the master node reads the archive
if MPI.IS_MASTER_NODE():
  A = HDF_Archive("my_archive.h5")
  G = A['Green']

# Broadcast the master's G to all the other nodes
G <<= MPI.bcast(G)

There is another simple example in the documentation where MPI is used to write to an archive:

http://ipht.cea.fr/triqs/doc/user_manual/solvers/dmft/dmft.html
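
In short, the pattern on that page is to let every rank do its work and to guard the archive writes with IS_MASTER_NODE(), so that only one process touches the HDF5 file. A minimal sketch, reusing the classes from the example above (the file name, the 'w' open mode and the key 'G' are placeholders, not taken from that page):

from pytriqs.Base.GF_Local import *
from pytriqs.Base.Archive import *
from pytriqs.Base.Utility import MPI

G = GFBloc_ImFreq(Indices=[1], Beta=100)
# ... every rank works on G here (solver run, k-sum, ...) ...

# Only the master writes, so the ranks do not clash on the HDF5 file
if MPI.IS_MASTER_NODE():
  A = HDF_Archive("results.h5", 'w')   # the 'w' open mode is an assumption
  A['G'] = G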

Hope this helps!
