
Can't reproduce the same RC on longest6 Benchmark #191

Closed
JinRanYAO opened this issue Sep 4, 2023 · 6 comments

@JinRanYAO

Hello, thank you for your excellent work.
I evaluated the model on the Longest6 benchmark with both your open-sourced pretrained models and models retrained by myself, but neither can reproduce the reported results, especially the route completion (RC). Ensembling 3 models can't get the same RC either.
For example, results from result_parser (ensemble of the 3 open-sourced pretrained models):

Avg. driving score: 46.410666666666664
Avg. route completion: 82.96050000000001
Avg. infraction penalty: 0.591
Collisions with pedestrians: 0.0660326965648726
Collisions with vehicles: 1.6361434815518432
Collisions with layout: 0.029347865139943377
Red lights infractions: 0.1320653931297452
Stop sign infractions: 0.39619617938923557
Off-road infractions: 0.03546616132499308
Route deviations: 0.0
Route timeouts: 0.0660326965648726
Agent blocked: 0.29347865139943374
I saw the same question in #176, and I run the CARLA server with the -opengl option and a block threshold of 180.0.
Compared with the results in the paper:

[image: results table from the paper]

There are fewer collisions with vehicles, but more agent-blocked infractions. Could you tell me if something is wrong? Thanks!
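For reference, here is a minimal sketch of how the CARLA leaderboard metrics relate to each other (the per-route numbers below are illustrative assumptions, not values from this run): each route's driving score is its route completion multiplied by its infraction penalty, and the benchmark reports averages over all routes.

```python
# Sketch of the CARLA leaderboard metric relationship.
# The routes below are made-up illustrative values, not real results.

def driving_score(route_completion: float, infraction_penalty: float) -> float:
    """Per-route driving score: completion (in %) scaled by the penalty (in [0, 1])."""
    return route_completion * infraction_penalty

# (route completion %, infraction penalty) per route -- hypothetical values
routes = [(100.0, 0.8), (60.0, 1.0), (90.0, 0.5)]

avg_ds = sum(driving_score(rc, ip) for rc, ip in routes) / len(routes)
avg_rc = sum(rc for rc, _ in routes) / len(routes)
avg_ip = sum(ip for _, ip in routes) / len(routes)

print(avg_ds, avg_rc, avg_ip)
```

Note that the average driving score is the average of per-route products, so it generally does not equal Avg. RC × Avg. penalty computed from the two reported averages.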

@Kait0
Collaborator

Kait0 commented Sep 4, 2023

You seem to get a similar result to the one in the mentioned issue. It doesn't look too wrong, and I don't have any other particular insight here.
Could you send us information about which GPU type and operating system you used for the evaluation?
I have seen a couple of people hit this kind of issue across various projects, so I want to keep track of whether there are any common factors.

@JinRanYAO
Author

JinRanYAO commented Sep 5, 2023

OK, thank you for your reply. I use Ubuntu 18.04 and an NVIDIA RTX 3090 GPU.

Could you please tell me if you find the reason? Thank you very much!

@Kait0
Collaborator

Kait0 commented Sep 5, 2023

[screenshot: black rendering artifacts in the CARLA camera view]

30xx-series GPUs are known to produce these black rendering artifacts with CARLA 0.9.10.1, which older GPUs don't produce.
I made this screenshot on my local computer, which also has a 3090 GPU.

This seems to be a driver bug in Unreal, so I don't really have a solution.
The neural network will likely be able to handle these artifacts if you train it with data collected on these GPUs, but the pretrained checkpoints were trained on data collected with 2080 Ti GPUs.

@JinRanYAO
Author

OK, thank you for your reply. How long does it take to re-collect the 280k dataset with 30xx-series GPUs?

@Kait0
Collaborator

Kait0 commented Sep 7, 2023

It depends primarily on how many GPUs you have. With 32x 3090s I would guess 1-2 days. If you only have a single GPU, it's probably too slow.
You can find an example of how to parallelize data collection in our latest repo.
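As a rough illustration of what such parallelization involves (not the actual scripts from the repo), one CARLA server can be launched per GPU, each on its own RPC port, so multiple collection runs proceed in parallel. The `CarlaUE4.sh` path, the `-carla-rpc-port`/`-opengl` flags, and the port spacing below are assumptions for illustration:

```python
# Hypothetical sketch: build one CARLA server launch command per GPU,
# pinning each server to a GPU via CUDA_VISIBLE_DEVICES and giving it
# a distinct RPC port so the servers don't collide.

def carla_launch_commands(num_gpus: int, base_port: int = 2000) -> list:
    """Return a shell command per GPU for parallel CARLA servers."""
    commands = []
    for gpu in range(num_gpus):
        port = base_port + gpu * 10  # spacing leaves room for streaming ports
        commands.append(
            f"CUDA_VISIBLE_DEVICES={gpu} ./CarlaUE4.sh -opengl "
            f"-carla-rpc-port={port} -nosound"
        )
    return commands

for cmd in carla_launch_commands(4):
    print(cmd)
```

Each data-collection client would then connect to its server's port, and the collected routes are merged afterwards.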

@JinRanYAO
Author

Oh, that really is too slow. I will try to find 20xx-series GPUs to reproduce the results. Thank you very much!
