
Stop rbf_kernel_grad and rbf_kernel_gradgrad creating the full covariance matrix unnecessarily #2388

Conversation

douglas-boubert (Contributor) commented:
The master implementations of rbf_kernel_grad and rbf_kernel_gradgrad create a PyTorch tensor of zeros to store the full covariance matrix, even when only the diagonal elements are needed.

If diag=True, the (potentially very large) tensor K is allocated but never used. This can avoidably exhaust GPU memory.

I have moved the allocation of the tensor that stores the full covariance matrix into the diag=False branch of the code.
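The pattern behind the fix can be sketched as follows. This is an illustrative toy function (the name rbf_covariance, its signature, and the plain-RBF body are hypothetical, not the actual GPyTorch rbf_kernel_grad code): the large tensor is only allocated inside the diag=False branch, so the diagonal path never touches that memory.

```python
import torch

def rbf_covariance(x1, x2, lengthscale=1.0, diag=False):
    """Hypothetical sketch of the allocation fix, not the GPyTorch code.

    The n-by-m covariance tensor is only created on the diag=False path;
    the diag=True path returns the diagonal without materializing it.
    """
    if diag:
        # For the plain RBF kernel, k(x_i, x_i) = 1 on the diagonal.
        # No n-by-m tensor is ever allocated here.
        n = min(x1.shape[0], x2.shape[0])
        return torch.ones(n, dtype=x1.dtype, device=x1.device)

    # Full-matrix path: the potentially large allocation happens only
    # inside this branch (previously it happened unconditionally).
    K = torch.zeros(x1.shape[0], x2.shape[0],
                    dtype=x1.dtype, device=x1.device)
    sq_dist = torch.cdist(x1, x2).pow(2)          # squared Euclidean distances
    K.copy_(torch.exp(-0.5 * sq_dist / lengthscale ** 2))
    return K
```

With diag=True the peak memory is O(n) rather than O(n*m), which is exactly the saving the PR describes for GPU workloads.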

@Balandat (Collaborator) left a comment:

Good catch! Always great to allocate a bunch of memory even if it's never needed... Thanks for the fix!

@Balandat Balandat merged commit 8979210 into cornellius-gp:master Aug 5, 2023
6 checks passed