
feat(train): automatically decide batch_size #342

Merged · 2 commits merged from feat/use-tuner into main on Apr 15, 2023
Conversation

@34j (Collaborator) commented Apr 15, 2023

No description provided.

@34j 34j merged commit 8ffa128 into main Apr 15, 2023
@34j 34j deleted the feat/use-tuner branch April 15, 2023 14:59
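No description was provided, but the feat/use-tuner branch name suggests PyTorch Lightning's batch-size tuner. As a hedged illustration only (the model and values below are illustrative, not this repo's actual classes or defaults), automatic batch-size selection with Lightning typically looks like this: the Tuner grows the batch size until a trial run hits out-of-memory, then keeps the largest size that fits.

```python
# Minimal sketch of Lightning's automatic batch-size scaling (assumed approach,
# not this repo's code). Requires lightning >= 2.0.
import torch
from torch.utils.data import DataLoader, TensorDataset
import lightning.pytorch as pl
from lightning.pytorch.tuner import Tuner


class ToyModel(pl.LightningModule):
    def __init__(self, batch_size: int = 8):
        super().__init__()
        # Exposes self.hparams.batch_size, which the Tuner rewrites between trials.
        self.save_hyperparameters()
        self.layer = torch.nn.Linear(32, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.mse_loss(self.layer(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters())

    def train_dataloader(self):
        ds = TensorDataset(torch.randn(1024, 32), torch.randn(1024, 1))
        return DataLoader(ds, batch_size=self.hparams.batch_size)


model = ToyModel()
trainer = pl.Trainer(max_epochs=1)
# Binary-searches for the largest batch size that fits in memory
# and writes it back to model.hparams.batch_size.
Tuner(trainer).scale_batch_size(model, mode="binsearch")
```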
@soliloqueen

Doesn't batch size affect more than just speed? If you increase it, accuracy can go down.

@markrmiller

You can compensate for the loss in model performance by adjusting the learning rate as well, though you may still get weaker generalization.

But if you're already at 8 or more (and you generally want a multiple of 8), I'd bet the difference between 8/16/32 here isn't going to be wild with no other changes.
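For concreteness, the adjustment markrmiller is describing is usually the linear scaling rule: if the batch size grows by a factor k, scale the learning rate by k as well. A minimal sketch (the base values are illustrative, not this project's defaults):

```python
def scaled_lr(base_lr: float, base_batch_size: int, new_batch_size: int) -> float:
    """Linear scaling rule: scale the learning rate in proportion to the batch size."""
    return base_lr * new_batch_size / base_batch_size


# Example: batch size raised from 8 to 32 -> learning rate 1e-4 becomes 4e-4.
print(scaled_lr(base_lr=1e-4, base_batch_size=8, new_batch_size=32))
```

Whether linear or square-root scaling works better depends on the optimizer and schedule, so treat it as a starting point rather than a guarantee.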
