Environment info
Operating System: Ubuntu 14.04.4
CPU: 4x Intel Xeon 24 core processors
Python version: 2.7.6
lightgbm version 2.1.2 installed via pip
When calling predict or predict_proba on an LGBMClassifier model, the number of threads used appears to correspond to the number of threads most recently used to train any lightgbm model. Is there a way to change the number of threads used during evaluation? I've tried setting the model.num_threads attribute directly and calling model.set_params(num_threads=4), but neither affects the number of threads used at prediction time. However, I can influence the thread count used when evaluating model A by training a new model B with a different number of threads.
For example:

```python
import lightgbm as lgb
import numpy as np

x = np.random.random((100000, 1000))
y = np.random.randint(0, 2, size=100000)

m1 = lgb.LGBMClassifier(num_threads=12)
m1.fit(x, y)  # 12 threads used

for _ in range(10):
    preds = m1.predict(x)  # 12 threads used

m2 = lgb.LGBMClassifier(num_threads=24)
m2.fit(x, y)  # 24 threads used

for _ in range(10):
    preds = m1.predict(x)  # 24 threads used!
```
It seems the issue could be resolved by calling omp_set_num_threads with the appropriate argument somewhere in the C API before the model is actually evaluated.
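In the meantime, a possible user-side workaround is to cap the OpenMP thread pool from the environment. This is only a sketch under the assumption that the OpenMP runtime bundled with lightgbm reads OMP_NUM_THREADS when it initializes, so the variable has to be set before the lightgbm shared library is first loaded; I have not verified this against lightgbm 2.1.2 specifically:

```python
import os

# OpenMP typically reads OMP_NUM_THREADS when the runtime initializes,
# so the cap must be in place before lightgbm's shared library loads.
os.environ["OMP_NUM_THREADS"] = "4"

# Imported *after* setting the variable (assumption: the cap then applies
# to both training and prediction for this process):
# import lightgbm as lgb
# model = lgb.LGBMClassifier().fit(x, y)
# preds = model.predict(x)  # would use at most 4 threads
```

Note this caps every OpenMP consumer in the process, not just lightgbm, and cannot be changed per-call the way an explicit omp_set_num_threads in the C API could.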