Thanks for your work. I integrated your model into my project (with slight changes). My program is published at https://github.com/lotress/MoePhoto. I'm also working on some improvements, including this one.
The problem with the three Vimeo90K models is that they weren't trained on any configuration other than 2x slomo; as a result, they are insensitive to the `embt` input and output almost the same prediction no matter what `embt` is. I can only use the GoPro models for now, but they produce slightly more artifacts than the Vimeo90K models; maybe both are undertrained.
The provided IFRNet, IFRNet-L, and IFRNet-S models trained on Vimeo-90K produce the same output frame when `embt` changes, because the convolution weight that multiplies `embt` is set to 0. As for the model trained on GoPro, it suffers from less training data, so its results are not as good as those of the models trained on Vimeo-90K.
I suggest building a large multi-frame training dataset and training IFRNet following the training scripts for the GoPro dataset. I will also do this later.
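If you want to verify this yourself, one way is to scan a checkpoint for parameter tensors that are entirely zero — if the weight multiplying `embt` is among them, the model cannot respond to `embt`. Below is a minimal, dependency-free sketch; the parameter names in the example are hypothetical, not IFRNet's actual names. With PyTorch you could build the input as `{n: p.detach().flatten().tolist() for n, p in model.named_parameters()}`.

```python
def find_dead_params(named_weights, tol=1e-8):
    """Return names of parameters whose values are all within tol of zero."""
    return [
        name
        for name, values in named_weights.items()
        if max((abs(v) for v in values), default=0.0) <= tol
    ]

# Toy example with made-up parameter names:
weights = {
    "decoder.conv_embt.weight": [0.0, 0.0, 0.0],  # all-zero: embt contribution is dead
    "decoder.conv1.weight": [0.3, -0.1, 0.05],
}
print(find_dead_params(weights))  # → ['decoder.conv_embt.weight']
```

Any name this reports is a parameter the model effectively ignores, which matches the behavior described above: changing `embt` has no effect on the output.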