Is your feature request related to a problem? Please describe.
For an IterableDataset, the length of the dataset may not be known in advance. Running validation every X examples would be helpful.
If you set val_check_interval to (batch_size / len(train_dataset)) * nb_batches, validation runs every nb_batches batches. Just be sure that the DataLoaders you use shuffle the data; otherwise the same data will be used for validation over and over again.
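To make the arithmetic concrete, here is a minimal sketch of that setup. It assumes a map-style dataset whose length is known; the dummy tensors and the particular values of batch_size and nb_batches are placeholders, not part of the original comment.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl

# Placeholder datasets; substitute your own.
train_dataset = TensorDataset(torch.randn(6400, 10), torch.randn(6400, 1))
val_dataset = TensorDataset(torch.randn(640, 10), torch.randn(640, 1))

batch_size = 32
nb_batches = 50  # run validation every 50 training batches

train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
val_loader = DataLoader(val_dataset, batch_size=batch_size, shuffle=True)

# A float val_check_interval is interpreted as a fraction of the training
# epoch, so this triggers a validation pass every nb_batches batches:
# nb_batches / (len(train_dataset) / batch_size).
trainer = pl.Trainer(
    val_check_interval=(batch_size / len(train_dataset)) * nb_batches,
)
# trainer.fit(model, train_loader, val_loader)  # `model` is your LightningModule
```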
Also, the ModelCheckpoint callback has to be adjusted:
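The snippet that followed was not preserved, so the sketch below is only a guess at the intended adjustment: on Lightning versions where ModelCheckpoint accepts a period argument, setting it to 0 lets the callback save on every validation run rather than at most once per epoch. The monitor="val_loss" setting assumes the model logs that metric.

```python
from pytorch_lightning.callbacks import ModelCheckpoint

# Hypothetical adjustment (the original snippet was lost): period=0 allows
# checkpointing at every validation run instead of once per epoch, on
# Lightning versions that expose the `period` argument.
checkpoint_callback = ModelCheckpoint(
    monitor="val_loss",  # assumes the LightningModule logs "val_loss"
    save_top_k=1,
    period=0,
)
# Then pass it to the Trainer, e.g. pl.Trainer(checkpoint_callback=checkpoint_callback)
```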
Early stopping does not seem to be affected by the validation frequency.
The only caveat is that the validation batches are now sampled randomly from the validation dataset, so it's not guaranteed that all of the data is used within one epoch. Not too sure if that's an issue, though.