Thanks for pointing it out. It seems this issue was introduced by newer PyTorch/NumPy versions. (It still worked as of last month, at least.)
I'd prefer to keep loss_weights as double and only cast to float when converting to torch.Tensor, because NumPy's default precision is double while PyTorch's is float.
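A minimal sketch of that approach, assuming loss_weights is a NumPy array derived from the dataset (the placeholder values below are illustrative):

```python
import numpy as np
import torch

# Stand-in for the dataset-derived class weights; NumPy defaults to float64.
loss_weights = np.ones(43) / 43

# Keep the NumPy array in double precision, and only downcast at the
# point where it becomes a torch.Tensor.
weight_tensor = torch.as_tensor(loss_weights, dtype=torch.float)  # float32
```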
When loading loss weights from the GTSRB dataset, I found that the default type of the loss weights is np.float64, which generates the following error:
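(The original traceback is not reproduced here; a minimal, hypothetical reproduction of the dtype mismatch it points to could look like this, with GTSRB's 43 classes used only for illustration.)

```python
import numpy as np
import torch
import torch.nn as nn

# Class weights computed with NumPy default to float64.
loss_weights = np.ones(43) / 43
weight_tensor = torch.from_numpy(loss_weights)   # dtype stays torch.float64

criterion = nn.CrossEntropyLoss(weight=weight_tensor)
logits = torch.randn(8, 43)                      # model outputs are float32
labels = torch.randint(0, 43, (8,))

# Typically raises a dtype-mismatch RuntimeError
# ("expected scalar type Float but found Double" or similar),
# since the float64 weights do not match the float32 logits.
loss = criterion(logits, labels)
```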
And it may be fixed by adding the following strict type conversion before line 182 of trojanzoo/trojanzoo/models.py:
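(The exact snippet from the issue is not shown; a plausible one-line conversion, assuming loss_weights is still a NumPy array at that point, would be:)

```python
# Hypothetical fix: force single precision before the array is wrapped
# into a torch.Tensor.
loss_weights = loss_weights.astype(np.float32)
```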