
Dataset construction uses all threads on the machine #5124

Closed
Tracked by #5153
ivannp opened this issue Apr 3, 2022 · 5 comments

@ivannp

ivannp commented Apr 3, 2022

Description

Passing nthreads to the lightgbm.Dataset constructor (via the params argument) doesn't seem to be respected: construct() appears to use all cores on the machine during some phases. I would expect construct() to be bounded by the specified maximum number of threads.

Reproducible example

Loading a large dataset via a hand-crafted Sequence object.
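
For illustration only (the reporter's actual Sequence code is not shown in this issue), here is a minimal sketch of the kind of construction being described, with num_threads passed through params. Class and variable names are made up, and it assumes a LightGBM version with lightgbm.Sequence support.

import lightgbm as lgb
import numpy as np

class NumpySequence(lgb.Sequence):
    # Illustrative Sequence backed by an in-memory array; a real use case
    # would read rows lazily from disk or another source.
    def __init__(self, data, batch_size=4096):
        self.data = data
        self.batch_size = batch_size  # LightGBM reads rows in chunks of this size

    def __getitem__(self, idx):
        # idx may be an int (single row) or a slice (batch of rows)
        return self.data[idx]

    def __len__(self):
        return len(self.data)

X = np.random.random(size=(100_000, 50))
y = np.random.random(size=(X.shape[0],))

# num_threads is passed via params; the report is that construct()
# still uses all cores during some phases.
ds = lgb.Dataset(NumpySequence(X), label=y, params={"num_threads": 4})
ds.construct()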

Environment info

LightGBM version or commit hash: 3.2.1

@jameslamb
Collaborator

Thanks for using LightGBM! We need some more information from you before we can help.

  1. Are you able to provide a minimal, reproducible example that demonstrates this behavior?
    • "Loading large dataset via a hand-crafted Sequence object" is not sufficient information for maintainers here to understand what you did and offer a suggestion without significant guessing.
  2. Can you please provide some of the other information that was requested in the issue template when you clicked "new issue"? Like:
    • what programming language are you using?
    • how did you install LightGBM?
  3. Can you try to install the latest version of LightGBM from source in this repo, or at least the latest released version (v3.3.2), and let us know if you still see this behavior?

@StrikerRUS
Collaborator

I think this issue and #4598 have the same root cause.

@jameslamb
Collaborator

While investigating #4598, I found substantial evidence that passing num_threads through the Dataset parameters does correctly change the number of threads used in Dataset construction: #4598 (comment).

I really think we need a reproducible example to investigate this report further. Otherwise, resolving this conclusively will require significant research and guesswork to figure out which combination of parameters, LightGBM version, and Python code reproduces this behavior.
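
For concreteness, the pattern referred to above is roughly the following (a minimal sketch, not code taken from #4598):

import lightgbm as lgb
import numpy as np

X = np.random.random(size=(10_000, 10))
y = np.random.random(size=(X.shape[0],))

# The expectation: num_threads passed through Dataset params bounds the
# number of OpenMP threads used during construct().
ds = lgb.Dataset(X, y, params={"num_threads": 2})
ds.construct()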

@ivannp
Author

ivannp commented Apr 23, 2022

#4598 investigates whether or not parallelism is enabled at all. The claim in this issue is that during some stages of dataset construction ALL threads on the machine are used, ignoring the requested num_threads. The specific dataset doesn't matter much; it's the parallelism behavior that does. At best, I can provide you with a screenshot of htop during dataset construction.
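
(One illustrative way to capture evidence beyond an htop screenshot would be to sample the process's CPU utilization while construct() runs, e.g. with psutil. This sketch is not from the thread, and psutil is an extra dependency.)

import threading
import time

import lightgbm as lgb
import numpy as np
import psutil

X = np.random.random(size=(500_000, 100))
y = np.random.random(size=(X.shape[0],))
ds = lgb.Dataset(X, y, params={"num_threads": 2})

samples = []
stop = threading.Event()

def sample_cpu():
    proc = psutil.Process()
    proc.cpu_percent(None)  # prime the counter; the first call always returns 0.0
    while not stop.is_set():
        time.sleep(0.5)
        # readings well above 200% would mean more than 2 cores busy
        # despite num_threads=2
        samples.append(proc.cpu_percent(None))

t = threading.Thread(target=sample_cpu)
t.start()
ds.construct()
stop.set()
t.join()
print("peak process CPU% during construct():", max(samples, default=0.0))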

@jameslamb
Collaborator

I believe this is fixed in newer versions of LightGBM. Specifically, I think that #6226 fixed this.

I got a c5a.4xlarge EC2 instance on AWS tonight (16 vCPUs).

Built LightGBM like this:

git clone --recursive https://github.com/microsoft/LightGBM.git
cd LightGBM
sh build-python.sh bdist_wheel install

Created a fairly expensive Dataset construction task:

  • 1 million rows
  • 100 features
  • no minimum number of samples per histogram bin (min_data_in_bin = 1)
  • up to 10,000 bins per feature
cat << EOF > make-data.py
import numpy as np

X = np.random.random(size=(1_000_000, 100))
y = np.random.random(size=(X.shape[0],))
np.save("X.npy", X)
np.save("y.npy", y)
EOF

python ./make-data.py
cat << EOF > check-multithreading.py
import lightgbm as lgb
import numpy as np
import time
import os
import sys

X = np.load("X.npy")
y = np.load("y.npy")
ds = lgb.Dataset(
    X,
    y,
    params={
        "verbose": -1,
        "min_data_in_bin": 1,
        "max_bin": 10000
    }
)
tic = time.time()
ds.construct()
toc = time.time()
num_threads = os.environ.get("OMP_NUM_THREADS", None)
print(f"threads: {num_threads} | execution time (s): {round(toc - tic, 3)}")
EOF

Tested with OMP_NUM_THREADS=1...

OMP_NUM_THREADS=1 \
    python ./check-multithreading.py
# threads: 1 | execution time (s): 22.849
[htop screenshot during construct()]

... and OMP_NUM_THREADS=4 (there are 16 total vCPUs available)

OMP_NUM_THREADS=4 \
    python ./check-multithreading.py
# threads: 4 | execution time (s): 6.156
[htop screenshot during construct()]

... and with OMP_NUM_THREADS not set at all

unset OMP_NUM_THREADS

python ./check-multithreading.py
# threads: None | execution time (s): 2.396
[htop screenshot during construct()]

For completeness, I repeated this same exercise with the environment variable OMP_NUM_THREADS unset and different values passed to the Dataset parameter num_threads instead... and found the same thing.
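
A sketch of that variant (assumed, not the exact script used): the same check as above, but with num_threads added to the Dataset params instead of setting OMP_NUM_THREADS.

cat << EOF > check-multithreading-params.py
import lightgbm as lgb
import numpy as np
import sys
import time

# number of threads requested via the Dataset parameter, from the command line
num_threads = int(sys.argv[1])

X = np.load("X.npy")
y = np.load("y.npy")
ds = lgb.Dataset(
    X,
    y,
    params={
        "verbose": -1,
        "min_data_in_bin": 1,
        "max_bin": 10000,
        "num_threads": num_threads
    }
)
tic = time.time()
ds.construct()
toc = time.time()
print(f"num_threads param: {num_threads} | execution time (s): {round(toc - tic, 3)}")
EOF

python ./check-multithreading-params.py 1
python ./check-multithreading-params.py 4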
