
Implement / revive convergence test #8

Closed
3 tasks
justheuristic opened this issue Mar 2, 2020 · 1 comment
Labels: enhancement (New feature or request)

Comments

@justheuristic (Member) commented:

It would be very useful to automatically test that a new feature hasn't broken model training.

Please implement a test that trains a simple model on a synthetic dataset or MNIST (preferred) and checks that it converges after a certain number of steps (i.e. the loss falls below some threshold).

Back in the day, Maxim implemented a basic MNIST training example that could come in handy.
You can find it here: https://gist.github.com/justheuristic/01d5ffe9c534d90e40badff35653ba7d

Recommendations:

  • The test should be easy to run, preferably with a single command, like this test;
  • Make sure the tests can run without a GPU so that we can run them automatically for pull requests;
  • When applicable, use the test_utils module;
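
A minimal sketch of what such a test might look like, assuming PyTorch and a pytest-style runner; the function name, dataset, learning rate, and loss threshold are illustrative assumptions, not part of the repository. It uses a synthetic, trivially separable dataset instead of MNIST so it runs on CPU with no downloads (the issue prefers MNIST, so a real implementation could swap in the MNIST loader from the gist above):

```python
# Hypothetical convergence test (all names and thresholds are assumptions).
# Trains a tiny MLP on synthetic data on CPU and asserts the loss converges.
import torch
import torch.nn as nn
import torch.nn.functional as F


def test_convergence(num_steps: int = 500, loss_threshold: float = 0.1):
    torch.manual_seed(42)  # make the test deterministic

    # Synthetic binary classification data: the label is the sign of the feature sum,
    # so the task is trivially separable and must be learnable in a few hundred steps.
    features = torch.randn(1024, 16)
    labels = (features.sum(dim=-1) > 0).long()

    model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
    optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

    loss = None
    for _ in range(num_steps):
        optimizer.zero_grad()
        loss = F.cross_entropy(model(features), labels)
        loss.backward()
        optimizer.step()

    # If training is broken, the loss stays near log(2) ≈ 0.69 and the assert fires.
    assert loss.item() < loss_threshold, f"loss did not converge: {loss.item():.3f}"
```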
justheuristic added the enhancement (New feature or request) label on Mar 2, 2020
@justheuristic (Member, Author) commented:

Merged into #2
