
How to evaluate on YouTube-Objects and Long-Videos datasets? #4

Open
xyzskysea opened this issue Aug 11, 2022 · 11 comments

Comments

@xyzskysea

How do you evaluate on the YouTube-Objects and Long-Videos datasets, and where can we download the corresponding datasets with ground truth? Could you also provide your results on these two datasets?

@CODE4UVOS
Contributor

Please refer to our related project.

@xyzskysea
Author

I followed [The YouTube-Objects dataset can be downloaded from here and annotation can be found here.].

Do you use dataset release v2.3? If possible, could you please provide the cleaned-up dataset and annotations directly (Google Drive or Baidu)?

@CODE4UVOS
Contributor

The YouTube-Objects dataset has already been uploaded to Google Drive.

@xyzskysea
Author

In YO2SEG, the Annotations folder is the ground truth and the mask folder contains the results. Right?

@CODE4UVOS
Contributor

'Annotation' is GT stored with values (0, 255); 'mask' is also GT, stored with values (0, 1).
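The two folders carry the same information in different encodings. A minimal sketch (my own helper, not part of the repository, assuming the masks are single-channel PNGs loadable with Pillow) that normalizes either encoding to {0, 1}:

```python
import numpy as np
from PIL import Image

def load_binary_mask(path):
    """Load a GT mask PNG and normalize it to values {0, 1}.

    Works for both encodings in the thread:
    - 'Annotation' folder: foreground stored as 255
    - 'mask' folder: foreground stored as 1
    """
    arr = np.array(Image.open(path).convert("L"))
    # Any nonzero pixel is foreground, regardless of encoding.
    return (arr > 0).astype(np.uint8)
```

With this, IoU/J-measure code can treat both folders identically.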

@xyzskysea
Author

OK, I tried that, but found that the size of aeroplane0001/00001.png is 854x480, while in the HGPU results it is 480x360. The AGNN results are also 480x360. So which are the correct images and ground truth? It's weird!

@CODE4UVOS
Contributor

They are both correct: 480x360 is the original size, and 854x480 is the size after we resized it.

@xyzskysea
Author

But when I tried to evaluate with PyDavis16EvalToolbox, it reported that the sizes are not consistent. How can I reproduce the results in your paper?
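A common workaround for this kind of size mismatch (this is my own sketch, not the authors' official procedure) is to resize each predicted mask to its ground-truth frame size with nearest-neighbor interpolation before running the toolbox, so the masks stay binary:

```python
from PIL import Image

def match_pred_to_gt(pred_path, gt_path, out_path):
    """Resize a predicted mask to the ground-truth frame size.

    Nearest-neighbor resampling keeps the mask binary
    (no interpolated gray values along object boundaries).
    """
    gt = Image.open(gt_path)
    pred = Image.open(pred_path).convert("L")
    if pred.size != gt.size:
        pred = pred.resize(gt.size, Image.NEAREST)
    pred.save(out_path)
```

Running this over the result folder (e.g. the 480x360 HGPU masks against the 854x480 ground truth) should silence the consistency check, though scores can shift slightly versus evaluating at the original resolution.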

@CODE4UVOS
Contributor

The evaluation toolbox is available at davis-matlab.

@chengguanjun

@xyzskysea Were you able to reproduce the paper's results on the YouTube-Objects dataset?

@qian507

qian507 commented Jun 15, 2023

> The YouTube-Objects dataset has already been uploaded to Google Drive.

Why does the YouTube-VOS data contain only val but not train?
