
Dataloaders are not prepared correctly across two accelerator.prepare calls when DeepSpeed is enabled #1101

Closed
lqtrung1998 opened this issue Feb 22, 2023 · 2 comments · Fixed by #1126

@lqtrung1998

Hi, I recently encountered this:

```python
# DeepSpeed enabled
dataloader1 = accelerator.prepare(dataloader1)  # correctly prepared
dataloader2 = accelerator.prepare(dataloader2)  # not correctly prepared
```

In the DeepSpeed path, dataloader preparation is gated on this check:

```python
if deepspeed_plugin.deepspeed_config["train_micro_batch_size_per_gpu"] == "auto":
```

Correct me if I'm wrong, but it seems that when DeepSpeed is enabled, a dataloader is only prepared in the first `prepare` call. After that call, I guess `train_micro_batch_size_per_gpu` is no longer `"auto"`, so the original `dataloader2` is returned unchanged.

Is this a bug, or is it expected behavior that is mentioned somewhere in the docs?

Thank you,
Trung
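
For illustration, here is a minimal sketch of the workaround implied by the analysis above, assuming a standard Accelerate setup with DeepSpeed enabled (the datasets and batch sizes are hypothetical): passing both dataloaders to a single `accelerator.prepare` call lets both be wrapped while `train_micro_batch_size_per_gpu` is still `"auto"`.

```python
# Hypothetical sketch: prepare both dataloaders in one call so each is
# wrapped before train_micro_batch_size_per_gpu is resolved from "auto".
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()  # assumes DeepSpeed was enabled via `accelerate config`

dataset1 = TensorDataset(torch.randn(64, 4))
dataset2 = TensorDataset(torch.randn(64, 4))
dataloader1 = DataLoader(dataset1, batch_size=8)
dataloader2 = DataLoader(dataset2, batch_size=8)

# A single prepare call wraps both dataloaders; two separate calls would
# leave dataloader2 unprepared, as described in this issue.
dataloader1, dataloader2 = accelerator.prepare(dataloader1, dataloader2)
```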

@sgugger
Collaborator

sgugger commented Feb 22, 2023

cc @pacman100

@pacman100
Contributor

pacman100 commented Feb 28, 2023

Hello @lqtrung1998, please let us know if the above PR fixes the issue.
