I think it used to be the case that the Julia process numbers and the MPI process ranks had the same ordering. On fccdb09 I get (I have tried five times now):
```julia
julia> manager = MPIManager(np = 4)
MPI.MPIManager(4,`mpirun -np 4 --output-filename /tmp/user/1021/juliazHWR3m`,"/tmp/user/1021/juliazHWR3m",60,Dict{Int64,Int64}(),Dict{Int64,Int64}(),RemoteRef(1,1,3),false)

julia> addprocs(manager)
4-element Array{Any,1}:
 2
 3
 4
 5

julia> @mpi_do manager println(MPI.Comm_rank(MPI.COMM_WORLD))
        From worker 4:  2
        From worker 2:  0
        From worker 3:  1
        From worker 5:  3
```
whereas on the latest MPI.jl master I get
```julia
julia> @mpi_do manager println(MPI.Comm_rank(MPI.COMM_WORLD))
        From worker 4:  1
        From worker 2:  3
        From worker 3:  0
        From worker 5:  2
```
which makes it a bit hard to use DArrays with MPI. Am I right that they used to be ordered consistently, and would it be possible to restore that in the future?
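For reference, a minimal sketch of how the mapping can be recovered explicitly rather than assumed (the `remotecall_fetch(pid, f)` argument order follows the Julia version shown above; `rank_of` and `pids_by_rank` are just illustrative names):

```julia
# Ask every worker for its MPI rank and record the pid -> rank mapping
# explicitly, instead of assuming pids and ranks share an ordering.
pids = workers()                     # e.g. [2, 3, 4, 5] after addprocs(manager)

rank_of = Dict{Int,Int}()
for p in pids
    rank_of[p] = remotecall_fetch(p, () -> MPI.Comm_rank(MPI.COMM_WORLD))
end

# Workers reordered so that pids_by_rank[r + 1] is the pid with MPI rank r.
pids_by_rank = pids[sortperm([rank_of[p] for p in pids])]
```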
No problem. I have a temporary solution in ScaLAPACK.jl where I add a DArray method that takes an MPIManager as argument; together with a check of the ordering, I think this is sufficient for now, so this issue is not blocking work on or with ScaLAPACK.jl.
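Roughly, the ordering check looks like the following sketch (not the actual ScaLAPACK.jl code; `rank_permutation` is a made-up name):

```julia
# Fetch each worker's MPI rank, warn if the pid order and rank order
# disagree, and return the permutation that puts the pids into rank order.
function rank_permutation(pids)
    ranks = [remotecall_fetch(p, () -> MPI.Comm_rank(MPI.COMM_WORLD)) for p in pids]
    issorted(ranks) || warn("Julia pid order and MPI rank order differ")
    return sortperm(ranks)
end

# A DArray can then be distributed over pids[rank_permutation(pids)].
```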
I talked with @jakebolewski about having a process type instead of just an integer. This could encode more information about the specific process, e.g. whether MPI transport is available and, in that case, the MPI rank.
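Purely as an illustration of that idea (none of these types exist anywhere yet), such a descriptor might look like:

```julia
# Hypothetical worker descriptor carrying transport details next to the pid.
immutable WorkerInfo
    pid::Int                  # Julia process number
    has_mpi_transport::Bool   # whether MPI transport is available
    rank::Int                 # MPI rank, meaningful only if has_mpi_transport
end
```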
cc: @amitmurthy