Use dtype dependent precision #844
base: main
Conversation
It would be very cool to have float32 support that "just works". I would expect that you will run into a couple more issues. In 653d6f1 I'm now running the test suite on a float32 dataset. This actually looks pretty good, it's just that on the inference side, we're still expecting doubles in a lot of places.
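The "expecting doubles" problem is easy to trigger: NumPy silently promotes float32 to float64 as soon as a float64 intermediate enters a computation. A minimal sketch of the pitfall (generic NumPy, not glum's internals):

```python
import numpy as np

X = np.ones((5, 3), dtype=np.float32)

# np.ones defaults to float64, so the whole expression gets upcast:
w = np.ones(X.shape[0])
print((X.T @ (w[:, None] * X)).dtype)     # float64 -- precision silently upcast

# Allocating intermediates in the input's dtype keeps the pipeline float32:
w32 = np.ones(X.shape[0], dtype=X.dtype)
print((X.T @ (w32[:, None] * X)).dtype)   # float32 -- dtype preserved
```

This is why each fix below threads `dtype=X.dtype` through to array constructors.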
This is an example fix for one of the mistakes causing the errors on @jtilly's branch:

```diff
--- a/src/glum/_glm.py
+++ b/src/glum/_glm.py
@@ -2128,7 +2128,7 @@ class GeneralizedLinearRegressorBase(BaseEstimator, RegressorMixin):
             )
             if (
-                np.linalg.cond(_safe_toarray(X.sandwich(np.ones(X.shape[0]))))
+                np.linalg.cond(_safe_toarray(X.sandwich(np.ones(X.shape[0], dtype=X.dtype))))
                 > 1 / sys.float_info.epsilon**2
             ):
                 raise np.linalg.LinAlgError(
```

There are a bunch of similar ones in the functions used for calculating the covariance matrix.
I think there are also quite a few "teething problems" ("Kinderkrankheiten") that are not covered by the tests. E.g., if run on "real data",
and
probably due to fixed convergence tolerances. Setting
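A common way to make such a fixed tolerance dtype-dependent (a general heuristic sketch, not glum's actual default) is to scale it with the machine epsilon of the working dtype, e.g. its square root:

```python
import numpy as np

def default_tol(dtype):
    # Sketch of a dtype-dependent convergence tolerance: sqrt of machine
    # epsilon is a common heuristic floor for iterative solvers.
    return float(np.sqrt(np.finfo(dtype).eps))

print(default_tol(np.float64))  # ~1.5e-08
print(default_tol(np.float32))  # ~3.5e-04
```

With a hard-coded float64-scale tolerance, a float32 solver can never satisfy the stopping rule and will spin until the iteration limit.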
Yes, this is a bit of a rabbit hole. We looked into this when we built I think we'll also have to do a bit of work in
Works fine with
Edit: reproducer here: https://github.com/Quantco/tabmat/compare/test-float32?expand=1
I'm having issues finding an
Two questions about the convergence criteria:
Do you have a reference on how to improve convergence? For reasonable
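One common convention for a dtype-robust stopping rule (a sketch under assumed names, not a reference implementation) is a gradient-norm criterion whose floor scales with both the coefficient magnitude and the machine epsilon of the working dtype:

```python
import numpy as np

def converged(grad, coef, tol_factor=1e4):
    # Hypothetical stopping rule: the infinity norm of the gradient must
    # fall below a floor that scales with the machine epsilon of the
    # gradient's dtype and with the size of the current coefficients.
    eps = np.finfo(grad.dtype).eps
    floor = tol_factor * eps * max(1.0, float(np.max(np.abs(coef))))
    return float(np.max(np.abs(grad))) < floor

coef = np.array([1.0, -2.0], dtype=np.float32)
tiny = np.array([1e-4, -1e-4], dtype=np.float32)
print(converged(tiny, coef))                     # True under float32 epsilon
print(converged(tiny.astype(np.float64), coef))  # False under float64 epsilon
```

The same gradient that counts as converged in float32 fails the float64 criterion, which matches the behavior described above for fixed tolerances.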
xref #843