
Bug in WeightedPersSampleLoss #203

Closed
vrodriguezf opened this issue Sep 10, 2021 · 3 comments
Labels
bug Something isn't working

Comments

@vrodriguezf
Contributor

This code:

```python
from tsai.all import *  # assumes tsai and its dependencies are installed

dsid = 'NATOPS'
X, y, splits = get_UCR_data(dsid, return_split=False)
learn = TSClassifier(X, y, splits=splits, bs=[64, 128],
                     batch_tfms=[TSStandardize()],
                     arch=InceptionTime,
                     cbs=[WeightedPerSampleLoss(np.arange(len(X)))],
                     metrics=accuracy)
learn.fit_one_cycle(25, lr_max=1e-3)
learn.plot_metrics()
```

leaves this output:

TypeError: int() argument must be a string, a bytes-like object or a number, not 'slice'

The line of the callback that is failing is this one

I think it has to do with the fact that learn.dls.train.idxs is a list, but learn.dls.valid.idxs is a slice object, and the callback expects a list.
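For illustration, the index-type mismatch can be reproduced with plain NumPy, outside of tsai (a minimal sketch; the variable names are hypothetical, not the callback's actual code):

```python
import numpy as np

weights = np.arange(10)
train_idxs = [3, 1, 4]    # the training dataloader exposes a list of indices
valid_idxs = slice(0, 4)  # the validation dataloader exposes a slice

# Both index types work directly on an array (or a torch tensor)...
print(weights[train_idxs])  # [3 1 4]
print(weights[valid_idxs])  # [0 1 2 3]

# ...but code that tries to coerce the index itself into an integer array
# fails on the slice with a TypeError like the traceback above
try:
    np.array(valid_idxs, dtype=int)
except TypeError as e:
    print(e)
```

So any fix that indexes a tensor or array directly, rather than converting the indices, should handle both dataloaders.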

Not sure how to fix it though :)

Thanks!!!

@oguiza oguiza added the bug Something isn't working label Sep 10, 2021
@oguiza
Contributor

oguiza commented Sep 10, 2021

Hi @vrodriguezf,

I've checked this and there's indeed a bug.
Could you please try this code? I think it works well. If so, I'll add it to the code base and load it to GitHub.

```python
class WeightedPerSampleLoss(Callback):
    order = 65

    def __init__(self, instance_weights):
        store_attr()

    def before_fit(self):
        # Save the original loss function and its reduction so they can be restored after training
        self.old_loss = self.learn.loss_func
        self.reduction = getattr(self.learn.loss_func, 'reduction', None)
        # Wrap the loss so it produces per-sample losses that can be weighted
        self.learn.loss_func = _PerInstanceLoss(crit=self.learn.loss_func)
        assert len(self.instance_weights) == len(self.learn.dls.train.dataset) + len(self.learn.dls.valid.dataset)
        # A tensor can be indexed with either a list (train idxs) or a slice (valid idxs)
        self.instance_weights = torch.as_tensor(self.instance_weights, device=self.learn.dls.device)

    def before_batch(self):
        # input_idxs is available on both the training and validation dataloaders
        input_idxs = self.learn.dls.train.input_idxs if self.training else self.learn.dls.valid.input_idxs
        self.learn.loss_func.weights = self.instance_weights[input_idxs]

    def after_fit(self):
        # Restore the original loss function and its reduction
        self.learn.loss_func = self.old_loss
        if self.reduction is not None: self.learn.loss_func.reduction = self.reduction


class _PerInstanceLoss(Module):
    def __init__(self, crit):
        self.crit = crit
        self.crit.reduction = 'none'  # keep per-sample losses so each can be weighted
        self.weights = None

    def forward(self, input, target):
        # Weighted average of the per-sample losses (weights normalized to sum to 1)
        return (self.crit(input, target) * self.weights / self.weights.sum()).sum()
```
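As a quick sanity check of the reduction used in `_PerInstanceLoss` (a standalone sketch with made-up numbers, no fastai needed): the per-sample losses are combined as a weighted average whose normalized weights sum to 1.

```python
import torch

per_sample_losses = torch.tensor([1.0, 2.0, 3.0])
weights = torch.tensor([1.0, 1.0, 2.0])

# Same reduction as _PerInstanceLoss.forward: weighted mean with normalized weights
loss = (per_sample_losses * weights / weights.sum()).sum()
print(loss.item())  # (1*1 + 2*1 + 3*2) / 4 = 2.25
```

The third sample contributes twice as much to the loss as the first two, which is the intended per-sample weighting.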

Please, let me know if it works.

@vrodriguezf
Contributor Author

It works like a charm :) thank you @oguiza !!!

@oguiza oguiza closed this as completed in 50f8767 Sep 10, 2021
@oguiza
Contributor

oguiza commented Sep 10, 2021

cc: @vrodriguezf
I've just loaded the updated code to GitHub.
Thanks for raising this!
