In my case, displaying a ~12 GB matrix in the REPL caused a ~28 GB spike in GPU memory usage on top of the 12 GB already held by the matrix (12 + 28). Of course, the memory is released once the evaluation is done.
We just convert the sparse GPU array to a sparse CPU array and use SparseArrays.jl's output methods; there's nothing special about evaluation in the REPL. So if there's surprising memory use, it should be reproducible by calling those conversion methods directly.
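If the spike really does come from display, the same path can be exercised without the REPL by performing the GPU-to-CPU conversion explicitly. A minimal sketch, assuming a CUDA-capable GPU and the CUDA.jl / SparseArrays packages; the sizes here are illustrative, not the 12 GB case from the report:

```julia
using CUDA, SparseArrays

# Build a sparse matrix on the GPU (CuSparseMatrixCSC under the hood).
x = cu(sprand(10_000, 10_000, 0.01))

CUDA.memory_status()          # baseline GPU memory usage

# Displaying `x` goes through a conversion to a CPU SparseMatrixCSC;
# calling the conversion directly should show the same memory behavior.
y = SparseMatrixCSC(x)

CUDA.memory_status()          # any display-related spike should appear here
```

Comparing the two `memory_status` readouts isolates the conversion cost from everything else the REPL does.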
Anyway, I can't reproduce:
julia> using CUDA, SparseArrays

julia> x = cu(sprand(10000, 10000, 0.01));

julia> CUDA.memory_status()
Effective GPU memory usage: 0.94% (459.625 MiB/47.504 GiB)
Memory pool usage: 7.660 MiB (32.000 MiB reserved)

julia> x
10000×10000 CuSparseMatrixCSC{Float32, Int32} with 998972 stored entries: ...

julia> CUDA.memory_status()
Effective GPU memory usage: 0.94% (459.625 MiB/47.504 GiB)
Memory pool usage: 7.660 MiB (32.000 MiB reserved)
Please, when filing bugs, always include an actual reproducer, as suggested by the bug filing template. That template also asks for crucial information, like the CUDA.jl version, Julia version, etc.
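For reference, the version details the template asks for can be gathered in two calls; a sketch, assuming CUDA.jl is installed and functional:

```julia
using CUDA

CUDA.versioninfo()   # CUDA.jl version, driver/toolkit versions, visible devices
versioninfo()        # Julia version, OS, and CPU details (Base.versioninfo)
```

Pasting the output of both into the issue covers the environment questions in one step.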
Here's how to reproduce: