
[CUDA] Initial work for boosting and evaluation with CUDA #5279

Merged: 6 commits into master from shiyu-cuda-obj on Jul 29, 2022

Conversation

@shiyu1994 (Collaborator) commented on Jun 10, 2022:

This PR initiates adding boosting objectives and evaluation metrics for cuda_exp (see #5163). It only creates the interfaces for CUDA objectives and metrics, and defaults back to boosting and evaluation on the CPU.
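As a rough illustration of the interface-plus-CPU-fallback pattern the description refers to (a sketch only; the class and factory names here are hypothetical stand-ins, not the PR's actual API):

```cpp
#include <memory>
#include <string>

// Sketch of an objective interface with a CPU fallback; names are
// hypothetical, not the PR's actual API.
class ObjectiveFunction {
 public:
  virtual ~ObjectiveFunction() = default;
  // Only objectives with a CUDA implementation override this to true.
  virtual bool IsCUDAObjective() const { return false; }
};

class RegressionL2Loss : public ObjectiveFunction {};

class CUDARegressionL2Loss : public RegressionL2Loss {
 public:
  bool IsCUDAObjective() const override { return true; }
};

// Factory: hand out the CUDA objective when one exists for the requested
// name; otherwise fall back to the CPU objective transparently.
std::unique_ptr<ObjectiveFunction> CreateObjective(const std::string& name,
                                                   bool use_cuda_exp) {
  if (use_cuda_exp && name == "regression") {
    return std::make_unique<CUDARegressionL2Loss>();
  }
  return std::make_unique<RegressionL2Loss>();
}
```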

@shiyu1994 (Collaborator, Author) commented on Jun 22, 2022:

@guolinke @jameslamb @StrikerRUS Could you please kindly help to review this PR? After merging this one, we can start adding more objectives for cuda_exp. That will make the single-GPU version of cuda_exp complete. Thanks a lot~

```cpp
namespace LightGBM {

CUDAScoreUpdater::CUDAScoreUpdater(const Dataset* data, int num_tree_per_iteration, const bool boosting_on_cuda):
  ScoreUpdater(data, num_tree_per_iteration), num_threads_per_block_(1024), boosting_on_cuda_(boosting_on_cuda) {
```
A collaborator commented:
Is boosting_on_cuda always true? Should we keep this variable?

shiyu1994 (Author) replied:

No. This is a workaround for objective functions that do not yet have a CUDA implementation. See line 98 of gbdt.cpp in this PR.
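In other words, the flag is computed from the objective at runtime. A paraphrased sketch of the check (not the verbatim gbdt.cpp code; the member names here are assumptions):

```cpp
// Paraphrased sketch: boosting stays on the GPU only when the configured
// objective has a CUDA implementation; otherwise gradients and hessians are
// computed on the CPU and copied to the device, hence the flag passed to
// the score updater.
const bool boosting_on_cuda = objective_function_ != nullptr &&
                              objective_function_->IsCUDAObjective();
train_score_updater_.reset(new CUDAScoreUpdater(
    train_data_, num_tree_per_iteration_, boosting_on_cuda));
```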

```cpp
std::vector<score_t> host_gradients_;
/*! \brief hessians on CPU */
std::vector<score_t> host_hessians_;
#endif  // DEBUG
```
A collaborator commented:

Is this code used?

shiyu1994 (Author) replied:

Yes. These are used in DEBUG mode to check that the split of the data and the split of the histogram are consistent. See the CheckSplitValid method in cuda_single_gpu_tree_learner.cpp.
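For context, a self-contained sketch of the kind of check these buffers support, assuming a plain cudaMemcpy for the device-to-host copy (the function name, parameters, and tolerance are illustrative, not the PR's exact code):

```cpp
#ifdef DEBUG
#include <cassert>
#include <cmath>
#include <cstdint>
#include <vector>
#include <cuda_runtime.h>

typedef float score_t;       // LightGBM's default score_t
typedef int32_t data_size_t;

// Sketch: copy device gradients to the host, re-sum them over the rows that
// went to the left child, and check the sum against the gradient total the
// histogram-based split reported for that child.
void CheckSplitValidSketch(const score_t* cuda_gradients, data_size_t num_data,
                           const data_size_t* left_indices, data_size_t num_left,
                           double sum_left_gradient_from_histogram) {
  std::vector<score_t> host_gradients(num_data);
  cudaMemcpy(host_gradients.data(), cuda_gradients,
             num_data * sizeof(score_t), cudaMemcpyDeviceToHost);
  double sum_left = 0.0;
  for (data_size_t i = 0; i < num_left; ++i) {
    sum_left += host_gradients[left_indices[i]];
  }
  assert(std::fabs(sum_left - sum_left_gradient_from_histogram) < 1e-6);
}
#endif  // DEBUG
```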

```diff
@@ -1045,11 +1045,11 @@ __global__ void AddPredictionToScoreKernel(
     const data_size_t global_data_index = data_indices_in_leaf[local_data_index];
     const int leaf_index = cuda_data_index_to_leaf_index[global_data_index];
     const double leaf_prediction_value = leaf_value[leaf_index];
-    cuda_scores[local_data_index] = leaf_prediction_value;
+    cuda_scores[global_data_index] += leaf_prediction_value;
```
A collaborator commented:

Is this a bug?

shiyu1994 (Author) replied:

No. When USE_BAGGING is false, data_indices_in_leaf holds all the data indices of the training data, so local_data_index is exactly global_data_index.
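To see why the old and new forms agree without bagging but differ with it, here is a minimal standalone sketch (not the PR's actual kernel): with bagging, data_indices_in_leaf is a gathered subset of rows, so the write must target the global row and accumulate.

```cuda
#include <cuda_runtime.h>

// Minimal sketch, not the PR's kernel: one thread per entry of
// data_indices_in_leaf, which lists the rows that landed in this leaf
// (a strict subset of all rows when bagging is enabled).
__global__ void AddPredictionToScoreSketch(
    const int* data_indices_in_leaf,
    const int* data_index_to_leaf_index,
    const double* leaf_value,
    double* scores,
    int num_data_in_leaf) {
  const int local_data_index = blockIdx.x * blockDim.x + threadIdx.x;
  if (local_data_index < num_data_in_leaf) {
    const int global_data_index = data_indices_in_leaf[local_data_index];
    const int leaf_index = data_index_to_leaf_index[global_data_index];
    // Without bagging, data_indices_in_leaf[i] == i, so writing at
    // local_data_index happened to hit the right row; with bagging the
    // write must go to the global row, and it must accumulate (+=) so the
    // new tree's output adds onto the existing score.
    scores[global_data_index] += leaf_value[leaf_index];
  }
}
```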

```cpp
}
#endif  // USE_CUDA_EXP
```
A collaborator commented:

Can we reduce this duplicated code, e.g. with predefined macros?

shiyu1994 (Author) replied:

In the next stage, we need to merge the CUDA versions of the metrics one by one. I think writing each metric explicitly, for both cuda_exp and the other versions, is more convenient for that stage.
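For reference, the predefined-macro approach suggested above might look roughly like this (illustrative only; Metric, IsCUDAMetric, and the metric class names are stand-ins, and the PR deliberately keeps each metric explicit instead):

```cpp
// Illustrative sketch of macro-based deduplication, not the PR's approach.
class Metric {
 public:
  virtual ~Metric() = default;
  virtual bool IsCUDAMetric() const { return false; }
};

class RMSEMetric : public Metric {};
class L2Metric : public Metric {};

// Stamp out a thin CUDA wrapper class for each metric.
#define DEFINE_CUDA_METRIC(BaseMetric)                   \
  class CUDA##BaseMetric : public BaseMetric {           \
   public:                                               \
    bool IsCUDAMetric() const override { return true; }  \
  };

DEFINE_CUDA_METRIC(RMSEMetric)  // defines CUDARMSEMetric
DEFINE_CUDA_METRIC(L2Metric)    // defines CUDAL2Metric
```

The trade-off shiyu1994 points to is real: once individual CUDA metrics start to diverge from their CPU counterparts, such a macro has to be unwound metric by metric.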

@StrikerRUS (Collaborator) commented:

> That will make the single-GPU version of cuda_exp complete.

I think you forgot about

@jameslamb (Collaborator) commented:

> please kindly help to review this PR

Before we put any time into this PR, can you please help with #5287 (comment)?

@shiyu1994 (Author) commented:

> That will make the single-GPU version of cuda_exp complete.
>
> I think you forgot about

Thanks for the reminder. We definitely also need those features to make our new CUDA version complete.

@shiyu1994 (Author) commented:

Close and reopen to trigger the CI.

@shiyu1994 shiyu1994 closed this Jul 27, 2022
@shiyu1994 shiyu1994 reopened this Jul 27, 2022
@shiyu1994 (Author) commented:

Close and reopen to trigger CI tests.

@shiyu1994 shiyu1994 closed this Jul 29, 2022
@shiyu1994 shiyu1994 reopened this Jul 29, 2022
@guolinke (Collaborator) left a review:
LGTM

@shiyu1994 shiyu1994 merged commit e0af160 into master Jul 29, 2022
@shiyu1994 shiyu1994 deleted the shiyu-cuda-obj branch July 29, 2022 06:23
shiyu1994 added a commit that referenced this pull request Aug 25, 2022
shiyu1994 added a commit that referenced this pull request Aug 29, 2022
* fix cuda_exp ci

* fix ci failures introduced by #5279

* cleanup cuda.yml

* fix test.sh

* clean up test.sh

* clean up test.sh

* skip lines by cuda_exp in test_register_logger

* Update tests/python_package_test/test_utilities.py

Co-authored-by: Nikita Titov <[email protected]>
@jameslamb jameslamb mentioned this pull request Oct 7, 2022
@github-actions (bot) commented:
This pull request has been automatically locked since there has not been any recent activity since it was closed. To start a new related discussion, open a new issue at https://github.com/microsoft/LightGBM/issues including a reference to this.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Aug 19, 2023