Fixed a bug in the constrained source term under MPI #33
Conversation
Force-pushed from 9f3dd75 to ca06b3b.
@WaltherM Just updated.
I see. I was just testing it at the moment... I'll rebase and let you know about the status...
Can't test this due to #34.
@norihiro-w Updated according to your comments.
@@ -26,6 +26,9 @@
/*--------------------- MPI Parallel -------------------*/
#if defined(USE_MPI) || defined(USE_MPI_PARPROC) || defined(USE_MPI_REGSOIL)
#include <mpi.h>
#ifdef OGS_FEM_IPQC
Sorry, after #35 we don't need this #ifdef anymore. SplitMPI_Communicator.h is always used if USE_MPI is defined.
@wenqing you can keep the original commit and remove the last commit including my comments.
Force-pushed from 20e4429 to ca06b3b.
@norihiro-w Recovered.
I will merge this after #35.
@norihiro-w Thanks. @WaltherM should give it a test before it is merged.
Will test after #35 is merged and this is rebased.
#35 was merged.
@wenqing could you please help me to rebase this? I tried it on master, but ogs crashes with [node002:13167] *** An error occurred in MPI_Allreduce
@WaltherM Rebased. If it still crashes, please send me your input files.
#else
MPI_Comm comm = comm_DDC;
#endif
MPI_Allreduce(&st_conditionst, &int_buff, 1, MPI_INT, MPI_SUM, comm);
@WaltherM A collective communication function can't be called here, because not all partitions satisfy the condition at line 8044:
if (st_node.size() > 0 && (long) st_node.size() > i)
That means some compute cores skip the if-condition while the others enter it and call MPI_Allreduce, which leads to a deadlock here.
For this reason, I have to close this PR and would like to discuss the issue with you later.
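To illustrate the deadlock pattern described above, here is a minimal, self-contained sketch; the helper name count_constrained_nodes and its arguments are hypothetical and not taken from the OGS sources. If a collective call such as MPI_Allreduce sits inside a branch that only some ranks enter, the ranks that entered it block waiting for the ranks that skipped it. Hoisting the collective out of the rank-dependent condition, so that every rank contributes a possibly-zero value, avoids the hang.

#include <mpi.h>
#include <vector>

// Hypothetical sketch, not OGS code: count constrained source-term nodes
// across all partitions in the given communicator.
int count_constrained_nodes(const std::vector<long>& st_node, MPI_Comm comm)
{
    // WRONG: only ranks with a non-empty st_node list reach the collective;
    // ranks with empty lists never call MPI_Allreduce, so the other ranks
    // wait forever (deadlock).
    //
    // if (st_node.size() > 0)
    // {
    //     int local = static_cast<int>(st_node.size());
    //     int global = 0;
    //     MPI_Allreduce(&local, &global, 1, MPI_INT, MPI_SUM, comm);
    //     return global;
    // }

    // SAFE: every rank computes its (possibly zero) contribution and
    // every rank calls the collective unconditionally.
    int local = static_cast<int>(st_node.size());
    int global = 0;
    MPI_Allreduce(&local, &global, 1, MPI_INT, MPI_SUM, comm);
    return global;
}

Any rank-dependent work can still stay inside the if-branch; only the collective itself must be executed by every rank of the communicator.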
@wenqing ok, let's discuss this on Monday.
As titled.