Add 'n' parameter to RandomVariable to access sample_n() functions #323
Conversation
interesting proposal. i like it! i wonder if it breaks any of the inference algorithms? my guess is no, because they all work on samples via a random variable's value(). also: the argument …
Good catch on the … Re: …
sure, …
Changed from 'n' to 'sample_n'. Improved handling of rv copying.
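(For illustration only, a hypothetical one-liner under the renamed keyword, assuming it behaves as a drop-in for the original `n` argument; neither spelling ultimately shipped, as this PR was later closed:)

```python
import tensorflow as tf
import edward as ed  # Edward 1.x API assumed

mu_vec = tf.constant([0.0, 1.0, 2.0])  # assumed parameter tensor

# Renamed keyword: 'sample_n' now requests the 10 draws that 'n' did.
x = ed.Normal(mu=mu_vec, sigma=1., sample_n=10)
```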
Force-pushed from a5ac146 to de5c24c.
* use python idiom instead of if x in scale args * let scale arg take tensors * add test for scale with tensor
Change `xrange` to `range` for python3 compatibility.
* Invgamma-Normal Metropolis-Hastings * Update invgamma_normal_mh.py * Update invgamma_normal_mh.py
…491) * add implicit_klqp.py * add example; let loss_obj be an argument * fix pep8 * let user pass in custom ratio loss * improve docstrings * generalize ImplicitKLqp's scale arg to be a dict * add simple unit test * tweak example * robustify ratio loss arg; update docstrings
* Fix get_variables for recursive ops * Allow copy of recursive graphs * Add tests for recursive args in copy / get_variables * Fix pep8 error * extend recursion safety to ancestors, children, descendants, and parents * Also mirror existing test for get_variables
* let session runs work directly on RandomVariable * add unit test * remove use of .value() in examples and docs
* refactor log_*_exp to use tf.reduce_logsumexp * replace all uses with tf.reduce_logsumexp
* tease out Laplace into new file laplace.py * replace hessian utility fn with tf.hessians * update laplace to work with MultivariateNormal* * add unit test; update docs * clean up code on pointmass vs normal * revise docs
* hierarchical logistic regression edward * pep8
* add tex/iclr2017.tex * prescribe edward's development version for now * add iclr2017.ipynb; revise snippets
* remove edward/{stats/,models/models.py} * remove all model wrapper examples * remove model wrapper docs * remove model wrapper tests * update docs/ * update tests/ * update .travis.yml,setup.py * remove ed.placeholder,ed.MFVI * update edward/criticisms/ * update edward/inferences/
* ppca tutorial * revising ppca tutorial * minor changes post code review
* Add Docker support for every environment * update Dockerfile and Makefile for the GPU environment * update Makefile for the GPU environment * update Makefile and README.md to specify the GPU environment * update Makefile to specify the GPU environment * update README.md for the GPU environment * add Dockerfile and update Makefile and README.md for the CPU environment
* make miscellaneous revisions to docs * force explicit 'from … import …' statements in all code snippets, no 'import *' except in getting started * update some examples
…ature/sample_n Rewrite to catch up with a couple months of changes.
@dustinvtran I've finally gotten back to this. I think it's doing what it should, but it'd be good to get another set of eyes on it. We can now pass in a tuple …
Cool! Is there a way to see the diff? The one here says 199 files changed.
A rebase to master should fix the number of changed files, shouldn't it?
I'm going to close this and open a new one based on the conjugacy PR #588. |
This is a small PR that adds a keyword argument `n` to Edward `RandomVariable`s that lets them access the `tf.contrib.distributions.Distribution.sample_n()` function. This lets us replace (IMO) hacky syntax like `ed.Normal(mu=tf.ones([10, 1]) * mu_vec, sigma=1.)` with `ed.Normal(mu=mu_vec, sigma=1., n=10)`, which is (IMO) clearer, very marginally more efficient, and makes doing conjugacy algebra on the graph a little easier in some cases.
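For concreteness, here are the two constructions side by side as a runnable sketch against the Edward 1.x / `tf.contrib.distributions` API of the time (the concrete `mu_vec` is an assumption for illustration, and the `n` keyword is only what this PR proposes; it was never merged in this form, since the PR was closed in favor of a successor):

```python
import tensorflow as tf
import edward as ed  # Edward 1.x, built on tf.contrib.distributions

# Assumed parameter tensor for illustration: a vector of three means.
mu_vec = tf.constant([0.0, 1.0, 2.0])

# Status quo: tile the parameters so the leading batch dimension
# carries the 10 desired samples.
x_tiled = ed.Normal(mu=tf.ones([10, 1]) * mu_vec, sigma=1.)

# Proposed here: leave the parameters un-tiled and request 10 draws
# through the underlying distribution's sample_n().
x_sampled = ed.Normal(mu=mu_vec, sigma=1., n=10)
```

Under the proposal, both yield samples of shape [10, 3], but the second keeps the un-tiled `mu_vec` visible in the graph, which is what eases conjugacy algebra on it.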