[CUDA] Initial work for boosting and evaluation with CUDA #5279
Conversation
@guolinke @jameslamb @StrikerRUS Could you please kindly help to review this PR? After merging this one, we can start adding more objectives for `cuda_exp`.
```cpp
namespace LightGBM {

CUDAScoreUpdater::CUDAScoreUpdater(const Dataset* data, int num_tree_per_iteration, const bool boosting_on_cuda):
  ScoreUpdater(data, num_tree_per_iteration), num_threads_per_block_(1024), boosting_on_cuda_(boosting_on_cuda) {
```
Is `boosting_on_cuda` always true? Should we keep this variable?
No. This is a workaround for objective functions that are not yet implemented in the CUDA version. See line 98 of gbdt.cpp in this PR.
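To illustrate the fallback being discussed, here is a minimal host-side sketch, not LightGBM's actual code: the names, and the set of "supported" objectives, are assumptions for illustration only. The idea is that `boosting_on_cuda` is false when the objective has no CUDA kernel yet, in which case gradients are produced on the CPU and copied to the device.

```cpp
#include <string>
#include <unordered_set>

// Hypothetical helper (illustrative only): report whether an objective
// already has a CUDA implementation. The set below is an assumed subset.
inline bool ObjectiveHasCUDAImplementation(const std::string& objective) {
  static const std::unordered_set<std::string> cuda_objectives = {
      "regression", "binary"};
  return cuda_objectives.count(objective) > 0;
}

// boosting_on_cuda mirrors the constructor flag discussed above: when the
// objective lacks a CUDA version, boosting falls back to the CPU and the
// score updater must copy gradients/hessians to the device each iteration.
inline bool ChooseBoostingOnCUDA(const std::string& objective) {
  return ObjectiveHasCUDAImplementation(objective);
}
```

With this sketch, an objective outside the assumed set would simply run the CPU path while histogram construction stays on the GPU.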
```cpp
  std::vector<score_t> host_gradients_;
  /*! \brief hessians on CPU */
  std::vector<score_t> host_hessians_;
#endif  // DEBUG
```
Is this code used anywhere?
Yes. It is used in DEBUG mode to check that the split of the data and the split of the histogram are consistent. See the `CheckSplitValid` method in cuda_single_gpu_tree_learner.cpp.
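As an assumption-level sketch of what such a DEBUG check does (the function name and structure here are hypothetical, not the actual `CheckSplitValid` code): after a split, the gradient sum recomputed from the data indices that went to one side should match the sum recorded in that side's histogram bins.

```cpp
#include <cmath>
#include <vector>

// Illustrative consistency check: recompute the left child's gradient sum
// from the partitioned data indices and compare it against the sum the
// histogram reported for the same side. A mismatch beyond tolerance would
// indicate that data partitioning and histogram construction disagree.
inline bool SplitIsConsistent(const std::vector<double>& gradients,
                              const std::vector<int>& left_indices,
                              double left_sum_from_histogram,
                              double eps = 1e-6) {
  double left_sum_from_data = 0.0;
  for (int idx : left_indices) {
    left_sum_from_data += gradients[idx];
  }
  return std::fabs(left_sum_from_data - left_sum_from_histogram) < eps;
}
```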
```diff
@@ -1045,11 +1045,11 @@ __global__ void AddPredictionToScoreKernel(
   const data_size_t global_data_index = data_indices_in_leaf[local_data_index];
   const int leaf_index = cuda_data_index_to_leaf_index[global_data_index];
   const double leaf_prediction_value = leaf_value[leaf_index];
-  cuda_scores[local_data_index] = leaf_prediction_value;
+  cuda_scores[global_data_index] += leaf_prediction_value;
```
Is this a bug?
No. When USE_BAGGING is false, `data_indices_in_leaf` contains all the data indices of the training data, so `local_data_index` is exactly `global_data_index`.
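To make the indexing argument concrete, here is a host-side model of the kernel's loop, a sketch under stated assumptions rather than the real CUDA code: without bagging, `data_indices_in_leaf` is the identity permutation, so accumulating into the global slot (the fixed line above) touches the same element the old local-index version did.

```cpp
#include <vector>

// Host-side model of AddPredictionToScoreKernel's indexing (illustrative;
// the real code is a CUDA kernel). Each "local" position looks up a global
// data index, maps it to its leaf, and accumulates that leaf's output.
inline void AddPredictionToScore(const std::vector<int>& data_indices_in_leaf,
                                 const std::vector<int>& data_index_to_leaf_index,
                                 const std::vector<double>& leaf_value,
                                 std::vector<double>* scores) {
  for (std::size_t local = 0; local < data_indices_in_leaf.size(); ++local) {
    const int global = data_indices_in_leaf[local];
    const int leaf = data_index_to_leaf_index[global];
    // Accumulate into the *global* slot; with the identity mapping
    // (no bagging) this coincides with the local slot.
    (*scores)[global] += leaf_value[leaf];
  }
}
```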
```cpp
}
#endif  // USE_CUDA_EXP
```
Can we reduce this duplicated code, e.g. with predefined macros?
In the next stage, we need to merge the CUDA versions of the metrics one by one. I think writing each metric explicitly, for both `cuda_exp` and the other versions, is more convenient for that stage.
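For reference, the macro approach the reviewer suggests could look roughly like the following. This is a hypothetical sketch, not LightGBM's code: the macro, the `MetricBase` class, and the naming scheme are all assumptions. One macro invocation expands to both the CPU and the CUDA variant of a metric, so the per-metric boilerplate is written once.

```cpp
#include <string>

// Minimal base class standing in for a metric interface (illustrative).
struct MetricBase {
  virtual ~MetricBase() = default;
  virtual std::string name() const = 0;
};

// Hypothetical macro: one invocation defines the CPU metric and its CUDA
// counterpart, avoiding duplicated per-metric registration code.
#define DEFINE_METRIC_PAIR(Name)                                \
  struct Name##Metric : MetricBase {                            \
    std::string name() const override { return #Name; }         \
  };                                                            \
  struct CUDA##Name##Metric : MetricBase {                      \
    std::string name() const override { return "cuda_" #Name; } \
  };

DEFINE_METRIC_PAIR(L2)  // expands to L2Metric and CUDAL2Metric
```

The trade-off, as the reply above notes, is that explicit per-metric code can be easier to modify while the CUDA metrics are still being merged one by one.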
I think you forgot about …
Before we put any time into this PR, can you please help with #5287 (comment)?
Thanks for the reminder. We definitely also need those features to make the new CUDA version complete.
Close and reopen to trigger the CI.
Close and reopen to trigger CI tests.
LGTM
* fix cuda_exp ci
* fix ci failures introduced by #5279
* cleanup cuda.yml
* fix test.sh
* clean up test.sh
* clean up test.sh
* skip lines by cuda_exp in test_register_logger
* Update tests/python_package_test/test_utilities.py

Co-authored-by: Nikita Titov <[email protected]>
This pull request has been automatically locked since there has not been any recent activity since it was closed. To start a new related discussion, open a new issue at https://github.com/microsoft/LightGBM/issues including a reference to this.
This PR initiates adding boosting objectives and evaluation metrics for `cuda_exp` (see #5163). This PR only creates interfaces for CUDA objectives and metrics, and defaults back to boosting and evaluation on the CPU.