
v4 refactor for GP module #5055

Merged — 36 commits merged into pymc-devs:main on Nov 21, 2021
Conversation

@bwengals (Contributor) commented Oct 7, 2021

This PR has updates for getting the tests passing for v4. Ref issue #5035.

A few other minor changes are included (will update this list):

  • np.int -> int, to get rid of a NumPy deprecation warning
  • Remove the is_observed option from gp.Marginal; it should always be True.
  • In constructors like gp.Latent, make mean_func and cov_func required kwargs (see the sketch below).
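For illustration, a minimal sketch of the calling-convention change (the mean and covariance functions here are arbitrary examples):

import pymc as pm

mean_func = pm.gp.mean.Zero()
cov_func = pm.gp.cov.ExpQuad(1, ls=0.1)

# v3 accepted positional arguments: pm.gp.Latent(mean_func, cov_func)
# v4 requires keyword arguments:
gp = pm.gp.Latent(mean_func=mean_func, cov_func=cov_func)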

Related PRs in pymc/pymc-examples

@codecov bot commented Oct 7, 2021

Codecov Report

Merging #5055 (c2b850d) into main (64d8396) will increase coverage by 1.05%.
The diff coverage is 92.56%.

❗ Current head c2b850d differs from pull request most recent head 461d26b. Consider uploading reports for the commit 461d26b to get more accurate results.

@@            Coverage Diff             @@
##             main    #5055      +/-   ##
==========================================
+ Coverage   77.95%   79.01%   +1.05%     
==========================================
  Files          88       88              
  Lines       14222    14242      +20     
==========================================
+ Hits        11087    11253     +166     
+ Misses       3135     2989     -146     
Impacted Files                        Coverage Δ
pymc/gp/cov.py                        98.07% <50.00%> (ø)
pymc/gp/gp.py                         93.60% <92.13%> (+33.35%) ⬆️
pymc/gp/util.py                       94.68% <96.66%> (+14.68%) ⬆️
pymc/distributions/multivariate.py    71.57% <0.00%> (-0.15%) ⬇️
pymc/bart/bart.py                     95.91% <0.00%> (+0.46%) ⬆️
pymc/math.py                          68.68% <0.00%> (+0.50%) ⬆️
pymc/backends/report.py               91.60% <0.00%> (+2.09%) ⬆️

Commits (excerpt):
… at 1e-6. add deprecation warning for is_observed arg
- use model.logp instead of variable.logp
- set req kwargs cov_func and mean_func
- fix weirdly small scale on some input X, y
- move predict calls into model block
- the two kron models outstanding
@bwengals (Contributor, Author) commented

Tests are passing now except for something windowsy. Undefined reference to 'dgemm_'? Anyone know why that might be?

@bwengals (Contributor, Author) commented

thank you @aloctavodia, made the changes. There are a few other DeprecationWarnings in here from a while back, like here. When should something like that be removed?

@aloctavodia (Member) commented

That seems to be very old. I think it's OK to remove most deprecation/future warnings. Given that we are working on a major release, it makes sense to keep only those deprecation/future messages related to changes from 3.x to 4.0.

@twiecki (Member) commented Oct 30, 2021

Yes, agree, we should remove everything that has a deprecation/future warning now (that wasn't introduced in this release).
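For reference, the deprecation pattern under discussion looks roughly like this — a hypothetical sketch, not the actual code in this PR; the signature and message text are illustrative:

import warnings

def marginal_likelihood(name, X, y, noise, is_observed=True):
    # Keep accepting the deprecated argument for one release cycle,
    # but warn so users can migrate before it is removed.
    if not is_observed:
        warnings.warn(
            "The `is_observed` argument has been deprecated; data is "
            "always treated as observed.",
            FutureWarning,
        )
    # ... build and return the marginal likelihood here ...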

@ricardoV94 ricardoV94 modified the milestones: v4.0.0, v4.0.0-beta2 Nov 5, 2021
@bwengals bwengals marked this pull request as ready for review November 8, 2021 02:34
@bwengals (Contributor, Author) commented Nov 8, 2021

So I think it's ready for a look. There are a couple of things I want to flag for the reviewers:

  • Arguments for all implementations are required kwargs, i.e., pm.gp.Marginal(mean_func=mean, cov_func=cov), not pm.gp.Marginal(mean, cov).
  • Replacement function for draw_values, pm.gp.util.replace_with_values. Not sure if I'm duplicating functionality here, or if there is a better way to do this.
  • Added the ability to set the "jitter", i.e., the value added to the diagonal before the Cholesky decomposition. It used to be hardcoded to 1e-6 (see the sketch after this list).
  • There are still quite a few warnings when the tests run that it'd be nice to get rid of; I'll be taking a look through those.
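A minimal sketch of the new keyword described in the third bullet (the exact signature is an assumption based on that description; jitter=1e-6 reproduces the old hardcoded value):

import numpy as np
import pymc as pm

X = np.linspace(0, 1, 50)[:, None]  # toy inputs for illustration

with pm.Model() as model:
    gp = pm.gp.Latent(
        mean_func=pm.gp.mean.Zero(),
        cov_func=pm.gp.cov.ExpQuad(1, ls=0.1),
    )
    # jitter is the value added to the covariance diagonal before the
    # Cholesky decomposition to keep it numerically positive definite
    f = gp.prior("f", X=X, jitter=1e-6)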

c = a * b

(c_val,) = pm.gp.util.replace_with_values(
    [c], replacements={"a": 2, "b": 3, "x": 100}, model=model
)
@ricardoV94 (Member) commented

Any reason to base this function on the variable names as opposed to the variables themselves?

@bwengals (Contributor, Author) commented Nov 8, 2021

No particular reason, it just struck me as the most natural, but I'm not extremely familiar with Aesara and its usage patterns. I suspect I'm duplicating functionality that's elsewhere here though... Is there a reason that would be preferred? Happy to switch to that.

@ricardoV94 (Member) commented

In general I think we are trying to move away from our dependency on variable names, but this is probably never going to happen, so... I guess either way.

About duplicating code: if this is a once-per-sampling kind of thing, I think it is fine.

More importantly, tests are passing and we should try to merge soon, as it is quite a large PR. We can always come back later for final polishing.

@bwengals (Contributor, Author) commented

If we're trying to move away from variable string names I'll go ahead and change it. Out of curiosity, what's the benefit of doing so?

Also, for gp.predict, it's not even once per sample, it's just once. You'd make your GP model, take the MAP estimate, say, and then do something like:

with model:
    mu, cov = gp.predict(X, point=map_estimate)

# plot predicted mean
plt.plot(mu)
# +/- 2 sigma uncertainty
plt.plot(mu + 2 * np.sqrt(np.diag(cov)), 'b')
plt.plot(mu - 2 * np.sqrt(np.diag(cov)), 'b')

Should replace_with_values live in gp.util? Or should it move to somewhere general, like aesaraf? There isn't anything necessarily GP specific about it -- it could be of general use.

Sounds good, and yes sorry that it is a large PR! I'm happy to talk through anything too.

@ricardoV94 (Member) commented Nov 8, 2021

> If we're trying to move away from variable string names I'll go ahead and change it. Out of curiosity, what's the benefit of doing so?

It makes some things easier, like graph manipulations or variable inspection (e.g., you don't need access to the model object to know what you are dealing with). Also, we don't need to worry about the transformed names, which are another nuisance in a lot of the codebase.

Less importantly, perhaps: there is no real limitation at the Aesara level that variables must have unique names.

> Also, for gp.predict, it's not even once per sample, it's just once. You'd make your GP model, take the MAP estimate, say, and then do something like:
>
>     with model:
>         mu, cov = gp.predict(X, point=map_estimate)
>
>     # plot predicted mean
>     plt.plot(mu)
>     # +/- 2 sigma uncertainty
>     plt.plot(mu + 2 * np.sqrt(np.diag(cov)), 'b')
>     plt.plot(mu - 2 * np.sqrt(np.diag(cov)), 'b')
>
> Should replace_with_values live in gp.util? Or should it move to somewhere general, like aesaraf? There isn't anything necessarily GP-specific about it -- it could be of general use.

Is there a case where you might want to evaluate for more than a single point? Is it conceptually similar to posterior predictive sampling?

If so, you may want to check whether any of the code in sample_posterior_predictive makes more sense here.

@bwengals (Contributor, Author) commented

> It makes some things easier, like graph manipulations or variable inspection (e.g., you don't need access to the model object to know what you are dealing with). Also, we don't need to worry about the transformed names, which are another nuisance in a lot of the codebase.

Gotcha, that makes a lot of sense, especially the transformed/untransformed distinction -- ideally that should stay under the hood.

> Is there a case where you might want to evaluate for more than a single point? Is it conceptually similar to posterior predictive sampling?

Never say never, but I doubt it. gp.predict is really just a convenience method for getting the conditional mean and variance given the MAP estimate. I actually did start with sample_posterior_predictive to figure out how to make this function, since that's where I'd stolen draw_values from originally! But it looked like draw_values had been refactored into component pieces.
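For context, a self-contained sketch of how the new helper is called, reconstructed from the test diff earlier in this thread (the model variables, including the extra "x", are assumptions for illustration):

import pymc as pm

with pm.Model() as model:
    a = pm.Normal("a", 0.0, 1.0)
    b = pm.Normal("b", 0.0, 1.0)
    x = pm.Normal("x", 0.0, 1.0)
    c = a * b

# pin the named variables to fixed values and evaluate the expression
(c_val,) = pm.gp.util.replace_with_values(
    [c], replacements={"a": 2, "b": 3, "x": 100}, model=model
)
# c_val should evaluate to 6.0 (= 2 * 3)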

@ricardoV94 (Member) commented

Do you mind adding release notes with the important changes (kwargs only, new utils, etc)?

RELEASE-NOTES.md (Outdated)

@@ -45,8 +45,14 @@ All of the above apply to:
 - Changes to the BART implementation:
   - A BART variable can be combined with other random variables. The `inv_link` argument has been removed (see [4914](https://github.com/pymc-devs/pymc3/pull/4914)).
   - Moved BART to its own module (see [5058](https://github.com/pymc-devs/pymc3/pull/5058)).
   - ...
 - Changes to the Gaussian Process (GP) submodule:
Member commented
Add a link to the PR in this line?

@bwengals (Contributor, Author) commented

pushed

@ricardoV94 (Member) left a review

LGTM. I am not very familiar with the GP module so I am trusting you and the tests as well :)

@bwengals (Contributor, Author) commented

Alright, thank you for the review @ricardoV94! I'll leave it unmerged for a few days, just in case someone wants to add anything.

 with pm.Model() as model:
     cov_func = pm.gp.cov.ExpQuad(3, [0.1, 0.2, 0.3])
     mean_func = pm.gp.mean.Constant(0.5)
-    gp = pm.gp.Marginal(mean_func, cov_func)
+    gp = pm.gp.Marginal(mean_func=mean_func, cov_func=cov_func)
     f = gp.marginal_likelihood("f", X, y, noise=0.0, is_observed=False, observed=y)
Member commented

If is_observed is deprecated, should this be removed?

@bwengals (Contributor, Author) commented

yes, thank you for finding these! took them out

         npt.assert_allclose(mu1, mu2, atol=0, rtol=1e-3)
         npt.assert_allclose(var1, var2, atol=0, rtol=1e-3)

     def testPredictCov(self):
         with pm.Model() as model:
             cov_func = pm.gp.cov.ExpQuad(3, [0.1, 0.2, 0.3])
             mean_func = pm.gp.mean.Constant(0.5)
-            gp = pm.gp.MarginalSparse(mean_func, cov_func, approx="DTC")
+            gp = pm.gp.MarginalSparse(mean_func=mean_func, cov_func=cov_func, approx="DTC")
             f = gp.marginal_likelihood("f", self.X, self.X, self.y, self.sigma, is_observed=False)
Member commented

Another is_observed

@twiecki twiecki merged commit 64c1464 into pymc-devs:main Nov 21, 2021
@twiecki (Member) commented Nov 21, 2021

Thanks @bwengals, awesome to have this ported!

Labels: GP (Gaussian Process), v4

5 participants