Could we make an official wrapper implementation on the Gaussian processes module to allow for a nonstationary kernel whose parameters can be adjusted locally in space?
Some time ago I did something similar in GPyTorch with deep kernel learning and KISS-GP. It would be really nice to have this feature in Pyro, since it is very useful when there is streaming, changing data with heteroskedasticity.
The main idea is to optimize and change the parameters of the main GP while it is running, to allow for a local view in the fitting process.
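For concreteness, one classic construction along these lines is the Gibbs kernel: a squared-exponential kernel whose lengthscale is itself a function of the input, so the GP can be wiggly in one region and smooth in another. The sketch below is plain NumPy (not a `pyro.contrib.gp` API; the names `gibbs_kernel` and `ell` are illustrative), just to show the math that a Pyro kernel subclass would implement:

```python
import numpy as np

def gibbs_kernel(x, z, lengthscale_fn):
    """Nonstationary (Gibbs) kernel for 1-D inputs.

    k(x, z) = sqrt(2 l(x) l(z) / (l(x)^2 + l(z)^2))
              * exp(-(x - z)^2 / (l(x)^2 + l(z)^2))

    where l(.) = lengthscale_fn is a positive, input-dependent lengthscale.
    Reduces to the stationary RBF kernel when l is constant.
    """
    lx = lengthscale_fn(x)[:, None]        # shape (n, 1)
    lz = lengthscale_fn(z)[None, :]        # shape (1, m)
    sq_sum = lx**2 + lz**2
    prefactor = np.sqrt(2.0 * lx * lz / sq_sum)
    sqdist = (x[:, None] - z[None, :]) ** 2
    return prefactor * np.exp(-sqdist / sq_sum)

# Hypothetical lengthscale function: the GP gets smoother away from x = 0.
def ell(x):
    return 0.5 + 0.5 * np.abs(x)

x = np.linspace(-2.0, 2.0, 50)
K = gibbs_kernel(x, x, ell)   # symmetric PSD Gram matrix with unit diagonal
```

In a Pyro version, `lengthscale_fn` would be a small parametric or neural map whose parameters are optimized jointly with the rest of the GP, which is what gives the "local view" described above.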
Do I understand correctly that you are building a wrapper library of pyro.contrib.gp in another repository and want official endorsement from the Pyro project? If so, then sure, please submit a PR adding an informative and not pushy link 😉 from the pyro.contrib.gp docs that points to your repo/page. If you're asking whether we'd host your code in this repo, I must warn you that your development velocity will be higher in an external repo.
Yeah, I meant that at the time I built something top-level, as a one-time application (it was for my master's project). It was not a library-level implementation.
I just wanted to point out that this would be a very useful feature; at the time I also saw no library that had it out of the box, which seemed a bit odd given the applicability of nonstationary GPs.
With that said, I would also be willing to contribute an implementation, why not.
I recall that there were multiple methods to achieve this; for reference:
https://arxiv.org/abs/2305.19242
https://arxiv.org/abs/2306.01263
https://arxiv.org/abs/1912.11713