diagnostics of restart run #20
In principle it should be possible, since the checkpoint files contain only particles and fields. @mccoys, any ideas why we store the Screen diag number as an attribute in the checkpoint? The warning comes from this block:
Ok, I see: the problem is that the screen is "incremental", so we need to store the particles that crossed the screen. In your case, the screen "appears" at the restart timestep; the diagnostic should be created and its values should restart from 0.
I do not fully understand why extra diagnostics in the restart run cause a problem. Note that there is no problem when the diagnostics in the restart run are the same as those of the initial run. Here is an extra error message from one of the MPI processes, besides the warning:
HDF5-DIAG: Error detected in HDF5 (1.8.18) MPI-process 289:
The issue is that the Screen needs to integrate all the data collected since the beginning of the simulation. Every timestep, it increments its arrays of data with the new data collected during that timestep. This is why the Screen needs to store its data in the checkpoints. When you restart the simulation, the Screen expects to find its previous data stored in the checkpoint. In your case, as you did not have a Screen in your first simulation, there was no stored Screen data, and the restart failed. We can still fix this problem by forcing the data to be set to 0 even when the diag did not exist before. We should be able to provide a bugfix soon.
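As a rough illustration, the intended fix amounts to something like the following minimal Python sketch (Smilei itself is C++; the function name and checkpoint layout here are hypothetical, and h5py is used only to mimic reading a checkpoint):

import numpy as np
import h5py

def restore_screen_data(checkpoint_path, diag_name, shape):
    # If the Screen existed in the previous run, resume from its
    # accumulated data; otherwise start the accumulation from zero.
    with h5py.File(checkpoint_path, "r") as f:
        if diag_name in f:
            return f[diag_name][()]
    return np.zeros(shape)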
Hi @phyax, On the other hand, the HDF5 errors you mention later look like the patch structure changed between the two runs. What changes did you make between the first run and the restart? There might be an issue in case you changed the Screen size between restarts. We are investigating this.
Hi @iltommi,
In fact, I tried to use DiagParticles in the restart run at the beginning; it did not work. Then I tried DiagScreen; it also did not work. Perhaps I need to have these diagnostics in the first run so that they can work in the restart run.
We did some modifications to the code (but these should not change much here). Anyway, here is what I tested (it is based on the benchmark tst1d_4_radiation_pressure_acc.py). Suppose you are in the smilei root:

mkdir run{1,2}

Split the benchmark file in two at line 100 (separating the diagnostics from the rest of the namelist):

split -l 100 benchmarks/tst1d_4_radiation_pressure_acc.py
mv xaa run1/run1.py
mv xab run2/run2.py

Run the first simulation (it will create checkpoint files at timestep 10000):

cd run1
mpirun -np 4 ../smilei run1.py 'DumpRestart(dump_step = 10000)'

and run the restart (with the restart_dir option pointing to the first run):

cd ../run2
mpirun -np 4 ../smilei ../run1/run1.py run2.py 'DumpRestart(restart_dir="../run1")'

And here are the resulting files:
Can you confirm this behaviour?
Hi @iltommi, I tried the example you gave and got the same results as yours. I am not really sure, at this point, what caused the restart-diagnostics error in my large-scale runs.
At what time during the restarted simulation did this problem appear?
Oh, another possibility. Do you have time-averaged DiagFields? If yes, then the version of the code here on GitHub may need to be patched before you can properly restart the simulation.
I did some research on the HDF5 error and found that it is not a restart issue. It is due to the number of bins in velocity space being too large. To reproduce my error, in the DiagScreen part of the input deck ../benchmarks/tst1d_4_radiation_pressure_acc.py, replace

axes = [["ekin", 0., 0.4, 10]]

with

axes = [["vx", -1., 1., 30],
        ["vy", -1., 1., 30],
        ["vz", -1., 1., 30]]

You will find the HDF5 error message without doing a restart. But it seems some screen data is still dumped; I did not check whether these data are usable. The reason I need such a fine grid is that I intend to correlate the particle gyro-phase with the wave phase for different perpendicular and parallel velocities. Is there any way to do this at this time?
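One way to get at the gyro-phase without a huge 3D velocity grid is to bin the phase angle directly in post-processing. A minimal numpy sketch, assuming vx is the parallel direction and vy, vz are perpendicular (the particle arrays below are random stand-ins, not Smilei output):

import numpy as np

rng = np.random.default_rng(1)
vx, vy, vz = rng.normal(size=(3, 100000)) * 0.3  # stand-in particle velocities

v_perp = np.hypot(vy, vz)
phase = np.arctan2(vz, vy)  # gyro-phase in (-pi, pi]

# Select a (v_par, v_perp) window, then histogram only the phase.
sel = (np.abs(vx - 0.1) < 0.05) & (np.abs(v_perp - 0.2) < 0.05)
hist, edges = np.histogram(phase[sel], bins=64, range=(-np.pi, np.pi))
print(hist.sum(), "particles in the selected velocity window")

Selecting the velocity window first and histogramming only the phase keeps the stored array small, instead of requiring a fine 3D velocity grid in the diagnostic itself.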
@phyax, I cannot reproduce this error, and I think the problem is somewhere else. In your original error log, you can see a different error. If you have another problem with the velocity space being too large, please post your error log. We can investigate what is going on, but I would be very surprised if a space of 30x30x30 points were too much. Technically, HDF5 should be able to support at least 1000x1000x1000.
I updated my version with the new commit. The error is still there. Please see the file run1.py.txt in the attachment. The only changes I made compared to tst1d_4_radiation_pressure_acc.py are:
1. "axes" in the DiagScreen
2. adding "DumpRestart"
Here is the screenshot of the error message: [image: error]
The simulation still completes, though. These error messages are gone if I remove the "DumpRestart" block.
Hi!
Looking at your error message, this reminds me that I've encountered something similar once. I'm not sure what happened, but it was always when I had some old files in the folder where Smilei was running. Also, it might have been because I was running low on free disk space.
@phyax, the error that you obtain is now a different one. It seems that the problem is related to storing DiagScreen in the checkpoint files. It is currently stored as an HDF5 attribute, not a proper dataset. It turns out that some versions of HDF5 restrict the size of attributes in a way I am not sure I understand yet. I will look at the possibility of changing the way it works: the DiagScreen information will be stored as a proper dataset.
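The attribute/dataset distinction can be reproduced outside Smilei. A minimal h5py sketch, assuming a 30x30x30 float64 histogram (~216 KB): older HDF5 versions cap attribute storage at about 64 KB unless dense attribute storage is enabled, while datasets have no such practical limit. The file and names below are illustrative, not Smilei's actual checkpoint layout:

import numpy as np
import h5py

data = np.zeros((30, 30, 30))  # a 30x30x30 Screen histogram, ~216 KB as float64

with h5py.File("checkpoint_sketch.h5", "w") as f:
    # As a dataset: fine, no practical size limit.
    f.create_dataset("DiagScreen0", data=data)

    # As an attribute: may fail on HDF5 builds that cap attribute
    # storage, consistent with the errors reported above.
    try:
        f.attrs["DiagScreen0_attr"] = data
    except (RuntimeError, OSError) as err:
        print("attribute write failed:", err)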
@phyax, can you confirm the patch works? I tested your case like this: two run files, the first without diags and the second with just the diags for the restart. The first file:
# ----------------------------------------------------------------------------------------
# SIMULATION PARAMETERS FOR THE PIC-CODE SMILEI
# ----------------------------------------------------------------------------------------
import math
l0 = 2.0*math.pi # laser wavelength
t0 = l0 # optical cycle
Lsim = 10.*l0 # length of the simulation
Tsim = 40.*t0 # duration of the simulation
resx = 500. # nb of cells in one laser wavelength
rest = resx/0.95 # nb of timesteps in one optical cycle (0.95 * CFL)
# plasma slab
def f(x):
if l0 < x < 2.0*l0:
return 1.0
else :
return 0.0
Main(
geometry = "1d3v",
interpolation_order = 2 ,
cell_length = [l0/resx],
sim_length = [Lsim],
number_of_patches = [ 8 ],
timestep = t0/rest,
sim_time = Tsim,
bc_em_type_x = ['silver-muller'],
random_seed = smilei_mpi_rank
)
Species(
species_type = 'ion',
initPosition_type = 'regular',
initMomentum_type = 'cold',
n_part_per_cell = 10,
mass = 1836.0,
charge = 1.0,
nb_density = trapezoidal(10.,xvacuum=l0,xplateau=l0),
temperature = [0.],
bc_part_type_xmin = 'refl',
bc_part_type_xmax = 'refl'
)
Species(
species_type = 'eon',
initPosition_type = 'regular',
initMomentum_type = 'cold',
n_part_per_cell = 10,
mass = 1.0,
charge = -1.0,
nb_density = trapezoidal(10.,xvacuum=l0,xplateau=l0),
temperature = [0.],
bc_part_type_xmin = 'refl',
bc_part_type_xmax = 'refl'
)
LaserPlanar1D(
boxSide = 'xmin',
a0 = 10.,
omega = 1.,
ellipticity = 1.,
time_envelope = tconstant(),
)
every = int(rest/2.)
DumpRestart(
restart_dir = None,
dump_step = 10000,
dump_minutes = 0., # dump before maximum wall-clock time
dump_deflate = 0,
exit_after_dump = True,
dump_file_sequence = 2,
)

Run the sim with this first file. The second file, with just the diagnostics for the restart, is:

DiagFields(
every = every,
fields = ['Ex','Ey','Ez','Rho_ion','Rho_eon']
)
DiagScalar(every=every)
DiagParticles(
output = "density",
every = every,
species = ["ion"],
axes = [
["x", 0., Lsim, 200],
["px", -10., 1000., 200]
]
)
DiagParticles(
output = "density",
every = every,
species = ["ion"],
axes = [
["ekin", 0., 200., 200, "edge_inclusive"]
]
)
for direction in ["forward", "backward", "both", "canceling"]:
DiagScreen(
shape = "sphere",
point = [0.],
vector = [Lsim/3.],
direction = direction,
output = "density",
species = ["eon"],
axes = [["ekin", 0., 0.4, 30],
["vx", -1., 1., 30],
["vy", -1., 1., 30],],
every = 3000
)
DiagScreen(
shape = "plane",
point = [Lsim/3.],
vector = [1.],
direction = direction,
output = "density",
species = ["eon"],
axes = [["ekin", 0., 0.4, 30],
["vx", -1., 1., 30],
["vy", -1., 1., 30],],
every = 3000
)
DumpRestart.restart_dir = "../run1"

Then run the restart.
@iltommi @mccoys I confirm that the last patch fixes the conflicts between the data dump and DiagScreen. Thank you! I have not looked at the output of the screen diag yet. But does the histogram produced by the screen diag count particles over a time period defined by 'every', over one timestep of the simulation, or from the beginning of the simulation? I read through the doc but did not find the answer. I understand that the output frequency of the screen diag is 'every', but I am not sure over which time period the screen diag accumulates particle data.
@phyax, the data of a Screen are accumulated from the beginning of the simulation or, in your case, from the point when you introduced the diagnostic. We will make this clearer in the doc. Thanks for this report.
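In other words, each Screen output is cumulative. A minimal Python sketch of these semantics (illustrative only, with random stand-in data, not Smilei code):

import numpy as np

rng = np.random.default_rng(0)
every = 3000
nbins = 30

# The Screen's histogram persists for the whole run (and is saved in
# checkpoints), so each output at a multiple of `every` is cumulative
# since t = 0, not a count over only the last `every` steps.
histogram = np.zeros(nbins)

def particles_crossing_screen(timestep):
    # Stand-in for the real particle tracking: random ekin values.
    return rng.random(5)

for timestep in range(1, 12001):
    ekin = particles_crossing_screen(timestep)
    hist, _ = np.histogram(ekin, bins=nbins, range=(0.0, 1.0))
    histogram += hist
    if timestep % every == 0:
        print(timestep, histogram.sum())  # grows monotonically: cumulative data

Each output includes everything since t = 0, which is also why the accumulated array must live in the checkpoints for a restart to work.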
Hi,
I have some large-scale runs and need to restart the simulation every 12 hours. I checked the initial results and found an interesting time chunk that I should look at more closely. So more diagnostics (specifically, screen diagnostics) were added in the restart run that were not present in the initial run. But the output came up with the warning:
[WARNING] Cannot find attribute DiagScreen0
The result is that the screen diagnostics were not written to disk. I would like to check whether adding more diagnostics is allowed in the restart run.
Thanks,
Xin