How to evaluate on YouTube-Objects and Long-Videos datasets? #4

Where can the corresponding datasets with ground truth be downloaded? Could you also provide your results on these two datasets?

Comments
Please refer to our related project.
The YouTube-Objects dataset has already been uploaded to Google Drive.
In YO2SEG, the 'Annotations' folder is the ground truth and the 'mask' folder contains the results, right?
'Annotations' is ground truth with values (0, 255); 'mask' is also ground truth, with values (0, 1).
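For concreteness, here is a minimal Python sketch that normalizes either encoding to {0, 1} so the two folders can be compared directly; the folder layout and file paths are illustrative assumptions based on this discussion, not taken from the release:

```python
# Minimal sketch (paths are hypothetical): load a mask stored as either
# {0, 255} ('Annotations') or {0, 1} ('mask') and normalize it to {0, 1}.
import numpy as np
from PIL import Image

def load_binary_mask(path):
    """Map any positive pixel value (1 or 255) to 1."""
    arr = np.array(Image.open(path).convert("L"))
    return (arr > 0).astype(np.uint8)

# Both calls should yield identical binary masks for the same frame.
gt_a = load_binary_mask("YO2SEG/Annotations/aeroplane0001/00001.png")
gt_b = load_binary_mask("YO2SEG/mask/aeroplane0001/00001.png")
```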
OK, I tried that, but found that the size of aeroplane0001/00001.png is 854x480, while in the HGPU results it is 480x360. I also found that the AGNN results are 480x360. So which are the correct images and ground truth?
They are both correct: 480x360 is the original size, and 854x480 is after we resized it.
But when I tried to evaluate with PyDavis16EvalToolbox, it reports that the sizes are not consistent. How can I reproduce the results in your paper?
The evaluation toolbox is available at davis-matlab.
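If you would rather stay in Python, one common workaround for the size mismatch is to resize the predicted masks to the ground-truth resolution with nearest-neighbor interpolation before running the toolbox. The sketch below is an assumed preprocessing step, not the authors' pipeline, and the directory names are hypothetical:

```python
# Minimal sketch: resize each predicted mask to its ground-truth frame's
# resolution so an evaluation toolbox no longer rejects inconsistent sizes.
import os
from PIL import Image

def resize_preds_to_gt(pred_dir, gt_dir, out_dir):
    os.makedirs(out_dir, exist_ok=True)
    for name in sorted(os.listdir(gt_dir)):
        gt = Image.open(os.path.join(gt_dir, name))
        pred = Image.open(os.path.join(pred_dir, name))
        # NEAREST keeps the mask binary; bilinear would introduce gray values.
        pred.resize(gt.size, Image.NEAREST).save(os.path.join(out_dir, name))

resize_preds_to_gt("results/aeroplane0001",
                   "YO2SEG/Annotations/aeroplane0001",
                   "results_resized/aeroplane0001")
```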
@xyzskysea Were you able to reproduce the paper's results on the YouTube-Objects dataset?
Why does the YouTube-VOS data only contain the val split, but not train?