Currently `src/jgi-denovo.sh` and `src/jgi-preproc.sh` are hard-coded to use 64 GB of memory. For someone testing this pipeline on a smaller machine, the parallel Java jobs will spit out an error message like this:
```
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 22906667008 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /home/octopus/hs_err_pid929156.log
```
On larger runs, requesting slightly more memory than is available will not produce an immediate error, but the pipeline will crash later during one of these stages.
For someone running this for the first time, it's not immediately clear where in the pipeline the error is coming from, or that they need to edit the script. Obviously, one can just edit the line `MEM="-Xmx6g"`, but we should be able to catch this in advance.
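For the "catch this in advance" part, here is a minimal sketch of a pre-flight check. It assumes a Linux host with `/proc/meminfo`; the `MEM_GB` variable name is hypothetical, not something already in the scripts:

```bash
#!/usr/bin/env bash
set -euo pipefail

MEM_GB=64  # the current hard-coded value

# MemAvailable is reported in kB; convert to GB (Linux only).
avail_gb=$(( $(awk '/MemAvailable/ {print $2}' /proc/meminfo) / 1024 / 1024 ))

# Fail fast with a clear message instead of letting the JVM crash mid-run.
if (( MEM_GB > avail_gb )); then
    echo "ERROR: pipeline is configured for ${MEM_GB} GB but only ${avail_gb} GB is available." >&2
    echo "Edit MEM in this script or request a smaller value." >&2
    exit 1
fi
```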
At a minimum, I would propose changing the default to 6 or 8 GB. Ideally, though, there should be a way to automatically use something like 75% of the available memory, and this should also be exposed as a command-line option.
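A minimal sketch of what the auto-detection plus command-line option could look like (the `-m` flag, `detect_mem_gb`, and `MEM_GB` are assumptions for illustration, not existing options in these scripts):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Default to 75% of MemAvailable (Linux); fall back to 8 GB if detection fails.
detect_mem_gb() {
    local avail_kb
    avail_kb=$(awk '/MemAvailable/ {print $2}' /proc/meminfo 2>/dev/null) || true
    if [[ -n "${avail_kb:-}" ]]; then
        echo $(( avail_kb * 3 / 4 / 1024 / 1024 ))
    else
        echo 8
    fi
}

MEM_GB=$(detect_mem_gb)

# Allow an explicit override, e.g.: ./jgi-denovo.sh -m 16
while getopts "m:" opt; do
    case "$opt" in
        m) MEM_GB="$OPTARG" ;;
        *) echo "Usage: $0 [-m mem_gb]" >&2; exit 1 ;;
    esac
done

MEM="-Xmx${MEM_GB}g"   # passed to the Java tools as before
echo "Using ${MEM_GB} GB for the Java heap (${MEM})"
```

With something like this, the 64 GB hard-coding becomes a sensible per-machine default, and anyone on a smaller box can override it without editing the script.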
Note: `variant-calling.sh` and `guided-assembly.sh` also have hard-coded memory values, but they don't seem to crash the way the JGI scripts do. It would still be nice to make these scripts more flexible as described above.