Hi, I encountered the following error when training r2-gaussian on a custom dataset.
    vol_pred, radii = voxelizer(
  File ".../miniconda3/envs/r2_gaussian/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File ".../miniconda3/envs/r2_gaussian/lib/python3.9/site-packages/xray_gaussian_rasterization_voxelization/voxelization.py", line 259, in forward
    return voxelize_gaussians(
  File ".../miniconda3/envs/r2_gaussian/lib/python3.9/site-packages/xray_gaussian_rasterization_voxelization/voxelization.py", line 49, in voxelize_gaussians
    return _VoxelizeGaussians.apply(
  File ".../miniconda3/envs/r2_gaussian/lib/python3.9/site-packages/xray_gaussian_rasterization_voxelization/voxelization.py", line 123, in forward
    ) = _C.voxelize_gaussians(*args)
RuntimeError: numel: integer multiplication overflow
This occurred at iteration 5000: Train: 25%|██▌ | 5000/20000 [4:04:20<13:13:43, 3.17s/it, loss=1.9e+00, pts=5.6e+05]
I set densify_until_iter to 1000 for this run, but training kept slowing down even after iteration 1000. I am using an A6000 GPU with 48 GB of memory. The dataset has a volume of dimensions [500, 500, 500] and projections of dimensions [768, 972].
Setting densify_until_iter to 0 allowed the training to complete but didn't produce any sensible results. How should I debug this?
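In case it's useful, this is the kind of size check I could add right before the voxelizer call to see what it is being fed at the failing iteration. The function name, `means3D`, and the int32 heuristic are my own guesses rather than anything from r2-gaussian, and `vol_shape` should be whatever grid the training loop actually queries (possibly a small sub-volume rather than the full [500, 500, 500]):

```python
import torch

INT32_MAX = 2**31 - 1

def check_voxelizer_inputs(means3D: torch.Tensor, vol_shape=(500, 500, 500)):
    """Log the sizes that reach the voxelizer at the current iteration."""
    n_gauss = means3D.shape[0]
    n_voxels = vol_shape[0] * vol_shape[1] * vol_shape[2]
    print(f"gaussians={n_gauss:,}  voxels={n_voxels:,}  product={n_gauss * n_voxels:,}")
    # Heuristic only: the CUDA voxelizer sizes intermediate buffers from
    # products of quantities like these, so a product past the int32 range is
    # one plausible trigger for "numel: integer multiplication overflow".
    if n_gauss * n_voxels > INT32_MAX:
        print("warning: product exceeds int32 range; consider pruning Gaussians "
              "or querying a smaller sub-volume for the TV loss")

# Example with roughly my numbers (pts=5.6e+05, 500^3 volume):
check_voxelizer_inputs(torch.zeros(560_000, 3))
```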
Hi, I apologize for the late response as I was on vacation. I didn't encounter this error during development, but it seems related to the voxelizer. I suggest disabling the TV loss so the voxelizer is not called during training; the TV loss usually gives a slight quality improvement but can slow training down. The number of Gaussians looks fine, so densification is probably not the issue.
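To illustrate what I mean, here is a rough sketch of guarding the TV term behind a zero weight so the voxelizer is never queried. The names (`lambda_tv`, `training_step`, the callback arguments) are illustrative placeholders, not the repository's actual API, so please map them onto your training loop:

```python
import torch

# Zero weight means the TV term, and hence the voxelizer query, is skipped.
lambda_tv = 0.0

def training_step(render_loss: torch.Tensor,
                  vol_query_fn=None,
                  tv_fn=None) -> torch.Tensor:
    """Combine the rendering loss with an optional TV regularizer.

    `vol_query_fn` stands in for whatever calls voxelizer(...) in the real
    training loop; `tv_fn` stands in for the 3D total-variation term.
    """
    loss = render_loss
    if lambda_tv > 0 and vol_query_fn is not None and tv_fn is not None:
        # This branch is what triggers the voxelizer and, in your case, the
        # "numel: integer multiplication overflow".
        vol_pred = vol_query_fn()
        loss = loss + lambda_tv * tv_fn(vol_pred)
    return loss

# With lambda_tv == 0 the voxelizer is never touched:
loss = training_step(torch.tensor(1.23, requires_grad=True))
loss.backward()
```

The trade-off is simply losing the small regularization benefit of the TV term; the rest of the optimization is unchanged.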