You're leaving out a lot of detail here (e.g., which optimization algorithm you're using, what's inside mycode.py, etc.). But speaking abstractly: if everything about your setup is deterministic, you should get exactly the same results regardless of the number of cores you use; mpi4py/MPI/distributed computing won't affect this. Similarly, your IDE shouldn't affect things either. If you have a random initial condition and you aren't seeding it the same way on all processes, the processes will be out of sync and you'll get garbage. If you're using a stochastic optimization algorithm and you aren't seeding it the same way on all processes, you'll likewise get garbage. Similarly, if you're explicitly calling certain routines on only a single process (e.g., with mp.am_master()), that can also produce garbage.
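As a minimal sketch of what "seeding the same way on all processes" can look like with mpi4py (the variable names, the use of NumPy's `default_rng`, and the shape of the design variables are illustrative assumptions, not taken from the original code):

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD

# Option 1: a fixed seed. Every rank (and every run) starts from
# the identical initial condition.
rng = np.random.default_rng(20240101)

# Option 2: draw fresh entropy on rank 0 and broadcast it, so runs
# differ from each other but all ranks within a run stay in sync.
seed = comm.bcast(
    np.random.SeedSequence().entropy if comm.rank == 0 else None, root=0
)
rng = np.random.default_rng(seed)

# Hypothetical design variables for the optimizer; the name, size,
# and bounds are placeholders, not from the original discussion.
x0 = rng.uniform(0.0, 1.0, size=1000)
```

With either option, launching via `mpirun -np 4 python -m mpi4py script.py` gives every rank the identical `x0`, which is the property the optimizer needs; without it, each rank starts from a different random point and the parallel result will diverge from the serial one.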
-
Hello, I installed parallel Meep, and when I run a multi-core parallel computation with `mpirun -np 4 python -m mpi4py mycode.py`, I get different optimization results than when I run the same code single-core in PyCharm. Is this because different optimization algorithms are being used, or does the execution logic of the program differ between the two? I designed a 3D splitter for equal power splitting, based on this example (#2852). The single-core run converges to transmissions of 0.494/0.495 at the two ports, but the parallel run converges to 0.35/0.64. I'm not very familiar with the mpi4py execution logic, and I would appreciate any insights or suggestions. @smartalecH @oskooi