Dropout speed issues #542

Closed
telmop opened this issue May 15, 2017 · 7 comments · Fixed by #1154

Comments

@telmop

telmop commented May 15, 2017

Hi there!

I have been experiencing some issues using dropout: it makes my system many times slower to train. Before dropout, a full epoch took ~6h on GPU; now it takes ~24h, a 4-fold increase. The only change in my code was to use dropout in the hidden layers of the net. GPU usage is also way down, and it often drops to 0, which didn't happen before.

Any clues?

@pmichel31415
Collaborator

Hmm, that is interesting; I have certainly never seen such a slowdown with dropout. Maybe this issue is linked to #438?

@telmop
Author

telmop commented May 15, 2017

That seems like a plausible explanation.

@pmichel31415
Collaborator

I will see if I can address this within the week.

@pmichel31415
Collaborator

Could you provide some minimal reproducible code so that I can test the changes?

@telmop
Author

telmop commented May 15, 2017

I just used the standard dropout, i.e., something like this:

import dynet as dy
...
# apply dropout with probability dropout_prob to the expression y
y_dropped = dy.dropout(y, dropout_prob)
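For reference, a minimal timing sketch along these lines should reproduce the slowdown. This is only a sketch assuming the current DyNet Python API (dy.Model, dy.parameter, dy.inputVector); the layer sizes, dummy input, and iteration count are illustrative, not taken from my actual system:

import time
import dynet as dy

# Hypothetical minimal repro: time forward/backward passes of a small
# two-layer net, with and without dropout, so the difference between
# the two timings isolates the cost of dy.dropout.
model = dy.Model()
W1 = model.add_parameters((512, 512))
W2 = model.add_parameters((512, 512))

def run(use_dropout, iters=1000):
    start = time.time()
    for _ in range(iters):
        dy.renew_cg()
        x = dy.inputVector([0.5] * 512)  # constant dummy input
        h = dy.tanh(dy.parameter(W1) * x)
        if use_dropout:
            h = dy.dropout(h, 0.5)  # the only difference between runs
        loss = dy.squared_norm(dy.parameter(W2) * h)
        loss.value()     # force the forward pass
        loss.backward()
    return time.time() - start

print("no dropout: %.2fs" % run(False))
print("dropout:    %.2fs" % run(True))

Comparing the two printed times should show how much of the slowdown dy.dropout alone accounts for.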

@pmichel31415
Collaborator

Upon more discussion, it seems that this problem needs some work upstream in Eigen, so it will probably take more time to fix.

@telmop
Author

telmop commented May 18, 2017

Ok, thanks.
