Compare performance of brmp (pyro/numpyro) and brms #12
Including both CPU and GPU (#14).
@null-a: We were running some benchmarks in pyro-ppl/numpyro#470, and are mostly wrapping that up now. From what we can see, it seems very likely that you will observe good performance on all your models. Let us know if there are any particular models you would like to test, or if you have observed anything slower than expected. We are also going to do a minor release tomorrow which should fix the memory issue, as well as the issue of the model code being recompiled when we run MCMC. For the latter, recompilation can be avoided by simply calling
No, not yet. (Now #79)
FWIW, the use case I have involves repeatedly performing inference on a model every time a new data point arrives, so it sounds like this particular case might not benefit from caching.
This is tricky... I think the proposal from @neerajprad to embed the current data into a fixed-size holder and provide data_size information to mask out the padded entries will work here. The pattern will be something like
and for the model
WDYT about it? I think this will work for regression cases, but caching won't work if there is a latent variable whose shape depends on the data size. Probably this will be easier if XLA supports caching with dynamic data sizes in the future.
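A minimal sketch of the pad-and-mask idea described above, written in plain NumPy for illustration (the names `pad`, `masked_loglik`, and `MAX_N` are hypothetical, not numpyro's API): the data is embedded into a holder of fixed shape, so a JIT-compiled function over it never retraces, and `data_size` masks the padding out of the log-likelihood.

```python
import numpy as np

MAX_N = 8  # fixed holder size; assumed large enough for the incoming stream

def pad(data, max_n=MAX_N):
    """Embed the current data into a fixed-size holder so its shape is static."""
    holder = np.zeros(max_n)
    holder[:len(data)] = data
    return holder, len(data)

def masked_loglik(holder, data_size, mu=0.0, sigma=1.0):
    """Gaussian log-likelihood where entries beyond data_size are masked out."""
    ll = -0.5 * ((holder - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))
    mask = np.arange(len(holder)) < data_size
    return np.sum(np.where(mask, ll, 0.0))

# As new points arrive, only the holder's contents and data_size change,
# never the array shape, so a jit-compiled version would not be recompiled.
data = [0.5, -1.2, 0.3]
holder, n = pad(data)
assert np.isclose(masked_loglik(holder, n),
                  masked_loglik(np.array(data), len(data)))
```

The same masking effect is what `numpyro.handlers.mask` provides inside a model; the sketch only shows why the padded likelihood matches the unpadded one.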