1. Are our sparse matvecs performing correctly? (I wasn't getting much signal from the sparse columns at one point.)
2. How can we transform our data or model to handle the large imbalance between not-clicked (97%) and clicked (3%) ads? (See the sketch after this list.)
3. How can we make L1 regularization work when we pass non-differentiable points?
4. Should we create transformers for the sparsification of text columns? Are there already transformers in place to do some of this that we should be using?
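One common answer to question 2 is to reweight samples so that the rare clicked class counts as much as the majority class in the loss. A minimal sketch using scikit-learn's utilities (the data here is synthetic and purely illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils.class_weight import compute_sample_weight

rng = np.random.RandomState(0)
y = rng.binomial(1, 0.03, size=10000)  # ~3% positives, like the click data
X = rng.randn(10000, 5)                # placeholder features

# "balanced" weights each sample by n_samples / (n_classes * count(class)),
# so the rare clicked class contributes as much to the loss as the majority.
w = compute_sample_weight("balanced", y)
clf = LogisticRegression(solver="lbfgs").fit(X, y, sample_weight=w)
```

Subsampling the not-clicked rows is another option; either way, the predicted probabilities need recalibrating afterwards if calibrated click rates matter.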
(3): L1 regularization won't work with plain L-BFGS. As mentioned in the discussion for #40, there is the "trick" to use L1 with L-BFGS (one version of it is sketched below), or one must use OWL-QN (e.g. https://pypi.python.org/pypi/PyLBFGS/0.1.3).
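For concreteness, here is one common version of that trick (not necessarily the one discussed in #40): split the coefficients into nonnegative positive and negative parts, which turns the L1 term into a smooth linear one, so box-constrained L-BFGS-B applies. A self-contained sketch with synthetic data:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.RandomState(0)
X = rng.randn(200, 10)
y = (rng.rand(200) < 0.5).astype(float)
d = X.shape[1]
lam = 1.0  # L1 penalty strength

def f_and_grad(theta):
    # theta stacks beta_plus and beta_minus, both kept >= 0 by the bounds
    # below; beta = beta_plus - beta_minus, so the linear penalty
    # lam * theta.sum() equals lam * ||beta||_1 at the optimum.
    bp, bm = theta[:d], theta[d:]
    beta = bp - bm
    p = 1.0 / (1.0 + np.exp(-X.dot(beta)))
    p = np.clip(p, 1e-12, 1 - 1e-12)   # guard the logs
    loss = -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)) + lam * theta.sum()
    g = X.T.dot(p - y)                 # gradient of the smooth part w.r.t. beta
    return loss, np.concatenate([g + lam, lam - g])

res = minimize(f_and_grad, np.zeros(2 * d), jac=True,
               method="L-BFGS-B", bounds=[(0, None)] * (2 * d))
beta = res.x[:d] - res.x[d:]
```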
You should be able to use regularizer='l2' though?
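If the estimator route works, it could look something like the sketch below. This assumes dask-glm's estimator interface with `regularizer` and `lamduh` (the penalty strength) parameters; names may differ across versions, so treat the exact call as an assumption.

```python
import numpy as np
import dask.array as da
from dask_glm.estimators import LogisticRegression  # interface assumed

rng = np.random.RandomState(0)
X = da.from_array(rng.randn(1000, 10), chunks=(250, 10))
y = da.from_array((rng.rand(1000) < 0.5).astype(float), chunks=250)

# L2 keeps the objective smooth everywhere, so gradient-based solvers
# like L-BFGS apply directly, with no OWL-QN-style machinery needed.
lr = LogisticRegression(regularizer="l2", lamduh=0.1)
lr.fit(X, y)
```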
(4): By "sparsification" do you mean one-hot encoding or feature-hashing type approaches? It seems you've used feature hashing here, which is what I've been using for the Criteo data too. Sklearn's relevant transformers are OneHotEncoder and FeatureHasher.
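A minimal FeatureHasher sketch for Criteo-style categorical columns (the tokens and the choice of n_features are illustrative):

```python
from sklearn.feature_extraction import FeatureHasher

# Encode each row as "column=value" tokens; hashing maps them into a
# fixed-width sparse matrix without building a vocabulary first.
rows = [
    ["site=a9f3", "device=mobile", "ad=171cb2"],
    ["site=0b5e", "device=desktop", "ad=9d401f"],
]
hasher = FeatureHasher(n_features=2 ** 20, input_type="string")
X = hasher.transform(rows)  # scipy.sparse CSR matrix, shape (2, 2**20)
```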
I tried out dask-glm on a subset of the Criteo data here:
https://gist.github.com/mrocklin/1a1c0b011e187a750a050eb330ac36b2
This used the following:
I suspect that there is still a fair amount to do here to optimize the performance and quality of the model.
cc @moody-marlin @TomAugspurger @MLnick