
Change type of hist buffer to float #5624

Merged · 6 commits · Jun 3, 2020

Conversation

@ShvetsKS (Contributor) commented May 1, 2020

Copy of #5523.

This PR changes the internal type of GradStats to avoid conversion for the hist method. It also fixes a bug related to numerical instability in the NeedReplace method.

Accuracy is about the same as with the double type; for the mnist data set it even improved slightly:

          mnist log-loss
Master    0.07304085
This PR   0.07301824

Similar changes were provided in PR #4529, but the full scope of those changes was reverted in #5008.

Performance improvements (1.5x for BuildHist):

santander   full train   InitData    BuildHist   SyncHist   PredictRaw
Master      179.71       47.24       58.01       14         46.48
This PR     162.94       47.51       38.12       8.08       54.7

mnist       full train   BuildHist   SyncHist    ApplySplit
Master      78.511       34.82       23.29       1.28
This PR     62.92        22.98       9.89        1.14

@ShvetsKS (Contributor, Author) commented May 1, 2020

@trivialfis GradientPair is used now. Thanks :)
An accuracy check on the covertype data set with float32 is in progress.

@ShvetsKS (Contributor, Author) commented May 1, 2020

covertype data set:

max_depth = 8, n_estimators = 500   test-merror   train-merror
Master                              0.06005       0.03634
This PR                             0.05926       0.03496

max_depth = 6, n_estimators = 500   test-merror   train-merror
Master                              0.10518       0.08969
This PR                             0.10491       0.09000

@ShvetsKS ShvetsKS marked this pull request as ready for review May 1, 2020 11:35
@RAMitchell (Member)

Changing accumulators from double to float is a risky business. Testing on one dataset is not enough. What if you generate a large number of synthetic labels with bad conditioning for summation and then test the accuracy? Let's go for the worst case and see what the limits of single precision are.

@ShvetsKS (Contributor, Author) commented May 6, 2020

@RAMitchell single precision is now an optional parameter, as in the GPU implementation.

@ShvetsKS ShvetsKS force-pushed the gradstat_change_type_d2 branch 2 times, most recently from ae56100 to 76f8338 Compare May 12, 2020 13:34
@ShvetsKS (Contributor, Author)

@trivialfis, @RAMitchell single precision was added as an option for the histogram build stage, and the tests were extended.

@@ -141,6 +141,25 @@ class GradientPairInternal {
public:
using ValueT = T;

inline void Add(const GradientPairInternal& b) {
Member

Can these be operator+= ?

Contributor Author

Yes, fixed.

#else // defined(XGBOOST_STRICT_R_MODE) && XGBOOST_STRICT_R_MODE == 1
memset(hist.data() + begin, '\0', (end-begin)*sizeof(tree::GradStats));
Member

Can we use std::fill consistently?

Contributor Author

We should consider that separately, since performance can be affected. See #5579.

struct BuilderMock : public QuantileHistMaker::Builder {
using RealImpl = QuantileHistMaker::Builder;
template <typename GradientSumT>
struct BuilderMock : public QuantileHistMaker::Builder<GradientSumT> {
Member

I initiated this test mock back then because quantile hist was an integrated class with very little outside dependency. Now it's growing thanks to your contributions. ;-) You can consider splitting it up in the future.

Contributor Author

OK. But for now only the type parameter was added :)

if (!float_builder_ || param_.subsample < 1.0f) {
return false;
} else {
return float_builder_->UpdatePredictionCache(data, out_preds);
Member

Indent.

@ShvetsKS (Contributor, Author), May 18, 2020

Fixed. The same issue below is also fixed.

@@ -35,14 +35,18 @@ namespace tree {

DMLC_REGISTRY_FILE_TAG(updater_quantile_hist);

#if !defined(GTEST_TEST)
Member

Why? Is this file included in test?

Contributor Author

It seems not; it was deleted.

I considered the GPU implementation of this parameter as a reference:
https://github.com/dmlc/xgboost/blob/master/src/tree/updater_gpu_hist.cu#L37

@@ -138,6 +138,15 @@ class ElasticNet final : public SplitEvaluator {
return 0.0;
}
}
inline float ThresholdL1(float g) const {
Member

Where is this used?

Contributor Author

It seems nowhere. Thanks; it was left over from earlier changes when float was used in GradStats.

@@ -1090,13 +1127,27 @@ void GHistBuilder::BuildBlockHist(const std::vector<GradientPair>& gpair,
const GradientPair stat = gpair[rid];
for (size_t j = ibegin; j < iend; ++j) {
const uint32_t bin = gmat.index[j];
p_hist[bin].Add(stat);
p_hist[bin].Add(stat.GetGrad(), stat.GetHess());
Member

Why not get this into GradientPair with operator+=?

Contributor Author

There is no way to call GradientPairInternal&lt;float&gt;::operator+=(const GradientPairInternal&lt;double&gt;&amp;).

@trivialfis (Member)

Sorry for the long wait. Will try to be more active on PR reviewing.

@ShvetsKS ShvetsKS requested a review from trivialfis May 18, 2020 15:09
@SmirnovEgorRu (Contributor) left a comment

Overall, the approach to handling float/double hist types is similar to what I see for the GPU. So there are no major comments on the code structure, only small possible improvements noted in the review comments.

But there is one major comment about the documentation to be fixed.

Review threads were opened on: doc/parameter.rst, include/xgboost/base.h, src/common/hist_util.h, src/tree/param.h, src/tree/updater_quantile_hist.cc, src/tree/updater_quantile_hist.h, tests/python/test_with_sklearn.py.
@codecov-commenter commented May 19, 2020

Codecov Report

Merging #5624 into master will decrease coverage by 0.24%.
The diff coverage is n/a.


@@            Coverage Diff             @@
##           master    #5624      +/-   ##
==========================================
- Coverage   82.56%   82.31%   -0.25%     
==========================================
  Files          11       12       +1     
  Lines        2598     2720     +122     
==========================================
+ Hits         2145     2239      +94     
- Misses        453      481      +28     
Impacted Files Coverage Δ
python-package/xgboost/dask.py 83.55% <0.00%> (-0.14%) ⬇️
python-package/xgboost/plotting.py 73.52% <0.00%> (ø)
python-package/xgboost/data.py 73.26% <0.00%> (ø)
python-package/xgboost/sklearn.py 91.31% <0.00%> (+0.02%) ⬆️
python-package/xgboost/compat.py 54.83% <0.00%> (+0.29%) ⬆️
python-package/xgboost/core.py 79.30% <0.00%> (+1.27%) ⬆️
python-package/xgboost/__init__.py 90.00% <0.00%> (+3.63%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update dd01e4b...0774363.

Additional review threads on src/common/hist_util.h and src/tree/updater_quantile_hist.cc (outdated, resolved).
@SmirnovEgorRu (Contributor)

@ShvetsKS, I don't have concerns about the code. But there are merge conflicts with master; let's resolve them, and I will be able to approve after that.

@ShvetsKS (Contributor, Author)

> @ShvetsKS, I don't have concerns about the code. But there are merge conflicts with master; let's resolve them, and I will be able to approve after that.

@SmirnovEgorRu Thanks for the review. The merge conflicts are resolved.

@@ -225,12 +225,15 @@ Parameters for Tree Booster
list is a group of indices of features that are allowed to interact with each other.
See tutorial for more information

Additional parameters for `gpu_hist` tree method
Additional parameters for `hist` tree method
Member

`hist` and `gpu_hist`.

Contributor Author

Fixed, thanks.

@trivialfis (Member) left a comment

Is it possible to avoid these warnings?

In file included from /home/fis/Workspace/XGBoost/xgboost/rabit/include/rabit/rabit.h:459,
                 from /home/fis/Workspace/XGBoost/xgboost/src/tree/updater_quantile_hist.cc:8:
/home/fis/Workspace/XGBoost/xgboost/rabit/include/rabit/./internal/rabit-inl.h: In instantiation of ‘void rabit::ReducerSafe_(const void*, void*, int, const MPI::Datatype&) [with DType = xgboost::detail::GradientPairInternal<float>; void (* freduce)(DType&, const DType&) = xgboost::detail::GradientPairInternal<float>::Reduce]’:
/home/fis/Workspace/XGBoost/xgboost/rabit/include/rabit/./internal/rabit-inl.h:286:5:   required from ‘rabit::Reducer<DType, freduce>::Reducer() [with DType = xgboost::detail::GradientPairInternal<float>; void (* freduce)(DType&, const DType&) = xgboost::detail::GradientPairInternal<float>::Reduce]’
/home/fis/Workspace/XGBoost/xgboost/src/tree/./updater_quantile_hist.h:197:49:   required from ‘xgboost::tree::QuantileHistMaker::Builder<GradientSumT>::Builder(const xgboost::tree::TrainParam&, std::unique_ptr<xgboost::TreeUpdater>, std::unique_ptr<xgboost::tree::SplitEvaluator>, xgboost::FeatureInteractionConstraintHost, const xgboost::DMatrix*) [with GradientSumT = float]’
/home/fis/Workspace/XGBoost/xgboost/src/tree/updater_quantile_hist.cc:59:18:   required from ‘void xgboost::tree::QuantileHistMaker::SetBuilder(std::unique_ptr<xgboost::tree::QuantileHistMaker::Builder<GradientSumT> >*, xgboost::DMatrix*) [with GradientSumT = float]’
/home/fis/Workspace/XGBoost/xgboost/src/tree/updater_quantile_hist.cc:104:39:   required from here
/home/fis/Workspace/XGBoost/xgboost/rabit/include/rabit/./internal/rabit-inl.h:264:16: warning: ‘void* memcpy(void*, const void*, size_t)’ writing to an object of non-trivially copyable type ‘class xgboost::detail::GradientPairInternal<float>’; use copy-assignment or copy-initialization instead [-Wclass-memaccess]
     std::memcpy(&tdst, pdst + (i * kUnit), sizeof(tdst));
     ~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /home/fis/Workspace/XGBoost/xgboost/include/xgboost/logging.h:14,
                 from /home/fis/Workspace/XGBoost/xgboost/src/tree/updater_quantile_hist.cc:20:
/home/fis/Workspace/XGBoost/xgboost/include/xgboost/base.h:132:7: note: ‘class xgboost::detail::GradientPairInternal<float>’ declared here
 class GradientPairInternal {
       ^~~~~~~~~~~~~~~~~~~~
In file included from /home/fis/Workspace/XGBoost/xgboost/rabit/include/rabit/rabit.h:459,
                 from /home/fis/Workspace/XGBoost/xgboost/src/tree/updater_quantile_hist.cc:8:
/home/fis/Workspace/XGBoost/xgboost/rabit/include/rabit/./internal/rabit-inl.h:265:16: warning: ‘void* memcpy(void*, const void*, size_t)’ writing to an object of non-trivially copyable type ‘class xgboost::detail::GradientPairInternal<float>’; use copy-assignment or copy-initialization instead [-Wclass-memaccess]
     std::memcpy(&tsrc, psrc + (i * kUnit), sizeof(tsrc));
     ~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /home/fis/Workspace/XGBoost/xgboost/include/xgboost/logging.h:14,
                 from /home/fis/Workspace/XGBoost/xgboost/src/tree/updater_quantile_hist.cc:20:
/home/fis/Workspace/XGBoost/xgboost/include/xgboost/base.h:132:7: note: ‘class xgboost::detail::GradientPairInternal<float>’ declared here
 class GradientPairInternal {
       ^~~~~~~~~~~~~~~~~~~~
/home/fis/Workspace/XGBoost/xgboost/src/tree/updater_quantile_hist.cc: In instantiation of ‘void xgboost::tree::QuantileHistMaker::Builder<GradientSumT>::InitNewNode(int, const xgboost::common::GHistIndexMatrix&, const std::vector<xgboost::detail::GradientPairInternal<float> >&, const xgboost::DMatrix&, const xgboost::RegTree&) [with GradientSumT = float]’:
/home/fis/Workspace/XGBoost/xgboost/src/tree/updater_quantile_hist.cc:536:9:   required from ‘void xgboost::tree::QuantileHistMaker::Builder<GradientSumT>::ExpandWithLossGuide(const xgboost::common::GHistIndexMatrix&, const xgboost::common::GHistIndexBlockMatrix&, const xgboost::common::ColumnMatrix&, xgboost::DMatrix*, xgboost::RegTree*, const std::vector<xgboost::detail::GradientPairInternal<float> >&) [with GradientSumT = float]’
/home/fis/Workspace/XGBoost/xgboost/src/tree/updater_quantile_hist.cc:613:5:   required from ‘void xgboost::tree::QuantileHistMaker::Builder<GradientSumT>::Update(const xgboost::common::GHistIndexMatrix&, const xgboost::common::GHistIndexBlockMatrix&, const xgboost::common::ColumnMatrix&, xgboost::HostDeviceVector<xgboost::detail::GradientPairInternal<float> >*, xgboost::DMatrix*, xgboost::RegTree*) [with GradientSumT = float]’
/home/fis/Workspace/XGBoost/xgboost/src/tree/updater_quantile_hist.cc:79:5:   required from ‘void xgboost::tree::QuantileHistMaker::CallBuilderUpdate(const std::unique_ptr<xgboost::tree::QuantileHistMaker::Builder<GradientSumT> >&, xgboost::HostDeviceVector<xgboost::detail::GradientPairInternal<float> >*, xgboost::DMatrix*, const std::vector<xgboost::RegTree*>&) [with GradientSumT = float]’
/home/fis/Workspace/XGBoost/xgboost/src/tree/updater_quantile_hist.cc:106:57:   required from here
/home/fis/Workspace/XGBoost/xgboost/src/tree/updater_quantile_hist.cc:1241:11: warning: unused variable ‘stats’ [-Wunused-variable]
     auto& stats = snode_[nid].stats;
           ^~~~~
In file included from /home/fis/Workspace/XGBoost/xgboost/rabit/include/rabit/rabit.h:459,
                 from /home/fis/Workspace/XGBoost/xgboost/src/tree/updater_quantile_hist.cc:8:
/home/fis/Workspace/XGBoost/xgboost/rabit/include/rabit/./internal/rabit-inl.h: In instantiation of ‘void rabit::ReducerSafe_(const void*, void*, int, const MPI::Datatype&) [with DType = xgboost::detail::GradientPairInternal<double>; void (* freduce)(DType&, const DType&) = xgboost::detail::GradientPairInternal<double>::Reduce]’:
/home/fis/Workspace/XGBoost/xgboost/rabit/include/rabit/./internal/rabit-inl.h:286:5:   required from ‘rabit::Reducer<DType, freduce>::Reducer() [with DType = xgboost::detail::GradientPairInternal<double>; void (* freduce)(DType&, const DType&) = xgboost::detail::GradientPairInternal<double>::Reduce]’
/home/fis/Workspace/XGBoost/xgboost/src/tree/./updater_quantile_hist.h:197:49:   required from ‘xgboost::tree::QuantileHistMaker::Builder<GradientSumT>::Builder(const xgboost::tree::TrainParam&, std::unique_ptr<xgboost::TreeUpdater>, std::unique_ptr<xgboost::tree::SplitEvaluator>, xgboost::FeatureInteractionConstraintHost, const xgboost::DMatrix*) [with GradientSumT = double]’
/home/fis/Workspace/XGBoost/xgboost/src/tree/updater_quantile_hist.cc:59:18:   required from ‘void xgboost::tree::QuantileHistMaker::SetBuilder(std::unique_ptr<xgboost::tree::QuantileHistMaker::Builder<GradientSumT> >*, xgboost::DMatrix*) [with GradientSumT = double]’
/home/fis/Workspace/XGBoost/xgboost/src/tree/updater_quantile_hist.cc:109:40:   required from here
/home/fis/Workspace/XGBoost/xgboost/rabit/include/rabit/./internal/rabit-inl.h:264:16: warning: ‘void* memcpy(void*, const void*, size_t)’ writing to an object of non-trivially copyable type ‘class xgboost::detail::GradientPairInternal<double>’; use copy-assignment or copy-initialization instead [-Wclass-memaccess]
     std::memcpy(&tdst, pdst + (i * kUnit), sizeof(tdst));
     ~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /home/fis/Workspace/XGBoost/xgboost/include/xgboost/logging.h:14,
                 from /home/fis/Workspace/XGBoost/xgboost/src/tree/updater_quantile_hist.cc:20:
/home/fis/Workspace/XGBoost/xgboost/include/xgboost/base.h:132:7: note: ‘class xgboost::detail::GradientPairInternal<double>’ declared here
 class GradientPairInternal {
       ^~~~~~~~~~~~~~~~~~~~
In file included from /home/fis/Workspace/XGBoost/xgboost/rabit/include/rabit/rabit.h:459,
                 from /home/fis/Workspace/XGBoost/xgboost/src/tree/updater_quantile_hist.cc:8:
/home/fis/Workspace/XGBoost/xgboost/rabit/include/rabit/./internal/rabit-inl.h:265:16: warning: ‘void* memcpy(void*, const void*, size_t)’ writing to an object of non-trivially copyable type ‘class xgboost::detail::GradientPairInternal<double>’; use copy-assignment or copy-initialization instead [-Wclass-memaccess]
     std::memcpy(&tsrc, psrc + (i * kUnit), sizeof(tsrc));
     ~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /home/fis/Workspace/XGBoost/xgboost/include/xgboost/logging.h:14,
                 from /home/fis/Workspace/XGBoost/xgboost/src/tree/updater_quantile_hist.cc:20:
/home/fis/Workspace/XGBoost/xgboost/include/xgboost/base.h:132:7: note: ‘class xgboost::detail::GradientPairInternal<double>’ declared here
 class GradientPairInternal {
       ^~~~~~~~~~~~~~~~~~~~

if (tree[nid].IsRoot()) {
if (data_layout_ == kDenseDataZeroBased || data_layout_ == kDenseDataOneBased) {
const std::vector<uint32_t>& row_ptr = gmat.cut.Ptrs();
const uint32_t ibegin = row_ptr[fid_least_bins_];
const uint32_t iend = row_ptr[fid_least_bins_ + 1];
auto begin = hist.data();
for (uint32_t i = ibegin; i < iend; ++i) {
const GradStats et = begin[i];
stats.Add(et.sum_grad, et.sum_hess);
Member

The stats variable defined in line 1241 is no longer used. Delete it.

Contributor Author

Deleted, thanks.

Contributor Author

But I don't think that we should fix:

/home/fis/Workspace/XGBoost/xgboost/rabit/include/rabit/./internal/rabit-inl.h:264:16: warning: ‘void* memcpy(void*, const void*, size_t)’ writing to an object of non-trivially copyable type ‘class xgboost::detail::GradientPairInternal<float>’; use copy-assignment or copy-initialization instead [-Wclass-memaccess] std::memcpy(&tdst, pdst + (i * kUnit), sizeof(tdst));

  1. class GradientPairInternal is trivially copyable (only two data fields are present, grad_ and hess_, and they are not pointers).
  2. To fix the warning we would have to change the common ReducerSafe_.

Contributor Author

Possibly similar warnings existed for GradStats, here:
rabit::Reducer<GradStats, GradStats::Reduce> histred_;

There are also only two fields:

/*! \brief sum gradient statistics */
 GradType sum_grad { 0 };
 /*! \brief sum hessian statistics */
 GradType sum_hess { 0 };

@trivialfis (Member)

Also:

In file included from /home/fis/Workspace/XGBoost/xgboost/src/tree/updater_quantile_hist.cc:7:
In file included from /home/fis/Workspace/XGBoost/xgboost/dmlc-core/include/dmlc/timer.h:21:
In file included from /home/fis/Workspace/XGBoost/xgboost/dmlc-core/include/dmlc/logging.h:15:
In file included from /usr/bin/../lib/gcc/x86_64-linux-gnu/9/../../../../include/c++/9/memory:80:
/usr/bin/../lib/gcc/x86_64-linux-gnu/9/../../../../include/c++/9/bits/unique_ptr.h:81:2: warning: delete called on 'xgboost::tree::HistSynchronizer' that is abstract but has non-virtual destructor [-Wdelete-abstract-non-virtual-dtor]
        delete __ptr;

@ShvetsKS (Contributor, Author)

> warning: delete called on 'xgboost::tree::HistSynchronizer' that is abstract but has non-virtual destructor [-Wdelete-abstract-non-virtual-dtor]

Fixed, and for HistRowsAdder also.

@ShvetsKS ShvetsKS requested a review from trivialfis May 22, 2020 06:33
@trivialfis (Member)

Let me take a look at why the warnings appear after this PR.

@trivialfis (Member)

Sorry, but could you please help find the cause of this warning? It wasn't here before this PR and I would like to know why. I would love to help, but I'm sick at the moment, so I might not be doing anything non-trivial these few days.

@trivialfis (Member)

These warnings are reproducible with GCC 8

@ShvetsKS (Contributor, Author) commented May 24, 2020

> These warnings are reproducible with GCC 8

@trivialfis sorry, but I can't reproduce these warnings on my system.

gcc version: 8.4.0
mkdir build
cd build
cmake ..
make -j4

Could you help with reproducing the described warnings?

@trivialfis (Member)

-Wall.

@ShvetsKS (Contributor, Author)

@trivialfis it seems the warning about writing to an object of non-trivially copyable type is fixed.
No other warnings are left, except pre-existing ones: as for GradStats, we have a similar warning for GradientPairInternal:

xgboost/src/common/hist_util.cc:839:9: warning: ‘void* memset(void*, int, size_t)’ clearing an object of non-trivial type ‘class xgboost::detail::GradientPairInternal<float>’; use assignment or value-initialization instead [-Wclass-memaccess]
   memset(hist.data() + begin, '\0', (end-begin)*
   ~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
          sizeof(xgboost::detail::GradientPairInternal<GradientSumT>));

but it is placed under the else branch of:
#if defined(XGBOOST_STRICT_R_MODE) && XGBOOST_STRICT_R_MODE == 1

@trivialfis (Member)

Thanks! Will try it today.

@ShvetsKS (Contributor, Author) commented Jun 1, 2020

@trivialfis are there any warnings still that I missed? :)

@trivialfis (Member) left a comment

LGTM! Thanks for the hard work on optimization!

@trivialfis trivialfis merged commit cd3d14a into dmlc:master Jun 3, 2020
nyoko added a commit to nyoko/xgboost that referenced this pull request Aug 12, 2020
* [dask] Accept other inputs for prediction. (dmlc#5428)


* Returns a series when input is dataframe.

* Merge assert client.

* [R-package] changed FindLibR to take advantage of CMake cache (dmlc#5427)

* Support pandas SparseArray. (dmlc#5431)

* [R-package] fixed uses of class() (dmlc#5426)

Thank you a lot. Good catch!

* [dask] Fix missing value for scikit-learn interface. (dmlc#5435)

* Ranking metric acceleration on the gpu (dmlc#5398)

* Add link to GPU documentation (dmlc#5437)

* Add Accelerated Failure Time loss for survival analysis task (dmlc#4763)

* [WIP] Add lower and upper bounds on the label for survival analysis

* Update test MetaInfo.SaveLoadBinary to account for extra two fields

* Don't clear qids_ for version 2 of MetaInfo

* Add SetInfo() and GetInfo() method for lower and upper bounds

* changes to aft

* Add parameter class for AFT; use enum's to represent distribution and event type

* Add AFT metric

* changes to neg grad to grad

* changes to binomial loss

* changes to overflow

* changes to eps

* changes to code refactoring

* changes to code refactoring

* changes to code refactoring

* Re-factor survival analysis

* Remove aft namespace

* Move function bodies out of AFTNormal and AFTLogistic, to reduce clutter

* Move function bodies out of AFTLoss, to reduce clutter

* Use smart pointer to store AFTDistribution and AFTLoss

* Rename AFTNoiseDistribution enum to AFTDistributionType for clarity

The enum class was not a distribution itself but a distribution type

* Add AFTDistribution::Create() method for convenience

* changes to extreme distribution

* changes to extreme distribution

* changes to extreme

* changes to extreme distribution

* changes to left censored

* deleted cout

* changes to x,mu and sd and code refactoring

* changes to print

* changes to hessian formula in censored and uncensored

* changes to variable names and pow

* changes to Logistic Pdf

* changes to parameter

* Expose lower and upper bound labels to R package

* Use example weights; normalize log likelihood metric

* changes to CHECK

* changes to logistic hessian to standard formula

* changes to logistic formula

* Comply with coding style guideline

* Revert back Rabit submodule

* Revert dmlc-core submodule

* Comply with coding style guideline (clang-tidy)

* Fix an error in AFTLoss::Gradient()

* Add missing files to amalgamation

* Address @RAMitchell's comment: minimize future change in MetaInfo interface

* Fix lint

* Fix compilation error on 32-bit target, when size_t == bst_uint

* Allocate sufficient memory to hold extra label info

* Use OpenMP to speed up

* Fix compilation on Windows

* Address reviewer's feedback

* Add unit tests for probability distributions

* Make Metric subclass of Configurable

* Address reviewer's feedback: Configure() AFT metric

* Add a dummy test for AFT metric configuration

* Complete AFT configuration test; remove debugging print

* Rename AFT parameters

* Clarify test comment

* Add a dummy test for AFT loss for uncensored case

* Fix a bug in AFT loss for uncensored labels

* Complete unit test for AFT loss metric

* Simplify unit tests for AFT metric

* Add unit test to verify aggregate output from AFT metric

* Use EXPECT_* instead of ASSERT_*, so that we run all unit tests

* Use aft_loss_param when serializing AFTObj

This is to be consistent with AFT metric

* Add unit tests for AFT Objective

* Fix OpenMP bug; clarify semantics for shared variables used in OpenMP loops

* Add comments

* Remove AFT prefix from probability distribution; put probability distribution in separate source file

* Add comments

* Define kPI and kEulerMascheroni in probability_distribution.h

* Add probability_distribution.cc to amalgamation

* Remove unnecessary diff

* Address reviewer's feedback: define variables where they're used

* Eliminate all INFs and NANs from AFT loss and gradient

* Add demo

* Add tutorial

* Fix lint

* Use 'survival:aft' to be consistent with 'survival:cox'

* Move sample data to demo/data

* Add visual demo with 1D toy data

* Add Python tests

Co-authored-by: Philip Cho <[email protected]>

* Force compressed buffer to be 4 bytes aligned. (dmlc#5441)

* Refactor tests with data generator. (dmlc#5439)

* Resolve travis failure. (dmlc#5445)

* Install dependencies by pip.

* Device dmatrix (dmlc#5420)

* Reducing memory consumption for 'hist' method on CPU (dmlc#5334)

* [R-package] fixed inconsistency in R -e calls in FindLibR.cmake (dmlc#5438)

* Thread safe, inplace prediction. (dmlc#5389)

Normal prediction with DMatrix is now thread safe with locks.  Added inplace prediction is lock free thread safe.

When data is on device (cupy, cudf), the returned data is also on device.

* Implementation for numpy, csr, cudf and cupy.

* Implementation for dask.

* Remove sync in simple dmatrix.

* Add support for dlpack, expose python docs for DeviceQuantileDMatrix (dmlc#5465)

* Reduce span check overhead. (dmlc#5464)

* Update dmlc-core. (dmlc#5466)

* Copy dmlc travis script to XGBoost.

* Prevent copying SimpleDMatrix. (dmlc#5453)

* Set default dtor for SimpleDMatrix to initialize default copy ctor, which is
deleted due to unique ptr.

* Remove commented code.
* Remove warning for calling host function (std::max).
* Remove warning for initialization order.
* Remove warning for unused variables.

* Remove silent parameter. (dmlc#5476)

* Enable parameter validation for skl. (dmlc#5477)

* Split up test helpers header. (dmlc#5455)

* Implement host span. (dmlc#5459)

* Accept other gradient types for split entry. (dmlc#5467)

* Implement robust regularization in 'survival:aft' objective (dmlc#5473)

* Robust regularization of AFT gradient and hessian

* Fix AFT doc; expose it to tutorial TOC

* Apply robust regularization to uncensored case too

* Revise unit test slightly

* Fix lint

* Update test_survival.py

* Use GradientPairPrecise

* Remove unused variables

* Fix dump model. (dmlc#5485)

* Small updates to GPU documentation (dmlc#5483)

* Add R code to AFT tutorial [skip ci] (dmlc#5486)

* Upgrade clang-tidy on CI. (dmlc#5469)

* Correct all clang-tidy errors.
* Upgrade clang-tidy to 10 on CI.

Co-authored-by: Hyunsu Cho <[email protected]>

* corrected spelling of 'list' (dmlc#5482)

* Edits on tutorial for XGBoost job on Kubernetes (dmlc#5487)

* add reference to gpu external memory (dmlc#5490)

* Fix out-of-bound array access in WQSummary::SetPrune() (dmlc#5493)

* [jvm-packages]add feature size for LabelPoint and DataBatch (dmlc#5303)

* fix type error

* Validate number of features.

* resolve comments

* add feature size for LabelPoint and DataBatch

* pass the feature size to native

* move feature size validating tests into a separate suite

* resolve comments

Co-authored-by: fis <[email protected]>

* Use ellpack for prediction only when sparsepage doesn't exist. (dmlc#5504)

* Fix checking booster. (dmlc#5505)

* Use `get_params()` instead of `getattr` intrinsic.

* Requires setting leaf stat when expanding tree. (dmlc#5501)

* Fix GPU Hist feature importance.

* Remove distcol updater. (dmlc#5507)

Closes dmlc#5498.

* Unify max nodes. (dmlc#5497)

* Fix github merge. (dmlc#5509)

* Update doc for parameter validation. (dmlc#5508)

* Update doc for parameter validation.

* Fix github rebase.

* Serialise booster after training to reset state (dmlc#5484)

* Serialise booster after training to reset state

* Prevent process_type being set on load

* Check for correct updater sequence

* Remove makefiles. (dmlc#5513)

* [R] R raw serialization. (dmlc#5123)

* Add bindings for serialization.
* Change `xgb.save.raw` into full serialization instead of simple model.
* Add `xgb.load.raw` for unserialization.
* Run devtools.

* [CI] Use devtoolset-6 because devtoolset-4 is EOL and no longer available (dmlc#5506)

* Use devtoolset-6.

* [CI] Use devtoolset-6 because devtoolset-4 is EOL and no longer available

* CUDA 9.0 doesn't work with devtoolset-6; use devtoolset-4 for GPU build only

Co-authored-by: Hyunsu Cho <[email protected]>

* fix typo "customized" (dmlc#5515)

* Ensure that configured dmlc/build_config.h is picked up by Rabit and XGBoost (dmlc#5514)

* Ensure that configured header (build_config.h) from dmlc-core is picked up by Rabit and XGBoost

* Check which Rabit target is being used

* Use CMake 3.13 in all Jenkins tests

* Upgrade CMake in Travis CI

* Install CMake using Kitware installer

* Remove existing CMake (3.12.4)

* Update Python doc. [skip ci] (dmlc#5517)

* Update doc for copying booster. [skip ci]

The issue is resolved in dmlc#5312.

* Add version for new APIs. [skip ci]

* Add Neptune and Optuna to list of examples (dmlc#5528)

* [jvm-packages] [CI] Create a Maven repository to host SNAPSHOT JARs (dmlc#5533)

* Write binary header. (dmlc#5532)

* Purge device_helpers.cuh (dmlc#5534)

* Simplifications with caching_device_vector

* Purge device helpers

* [dask] dask cudf inplace prediction. (dmlc#5512)

* Add inplace prediction for dask-cudf.

* Remove Dockerfile.release, since it's not used anywhere

* Use Conda exclusively in CUDF and GPU containers

* Improve cupy memory copying.

* Add skip marks to tests.

* Add mgpu-cudf category on the CI to run all distributed tests.

Co-authored-by: Hyunsu Cho <[email protected]>

* [CI] Use Ubuntu 18.04 LTS in JVM CI, because 19.04 is EOL (dmlc#5537)

* [jvm-packages] [CI] Publish XGBoost4J JARs with Scala 2.11 and 2.12 (dmlc#5539)

* Fix CLI model IO. (dmlc#5535)


* Add test for comparing Python and CLI training result.

* Fix uninitialized value bug in xgboost callback (dmlc#5463)


Co-authored-by: Philip Hyunsu Cho <[email protected]>

* Use thrust functions instead of custom functions (dmlc#5544)

* Optimizations for RNG in InitData kernel (dmlc#5522)

* optimizations for subsampling in InitData

* optimizations for subsampling in InitData

Co-authored-by: SHVETS, KIRILL <[email protected]>

* Add missing aft parameters. [skip ci] (dmlc#5553)

* Don't use uint for threads. (dmlc#5542)

* Fix skl nan tag. (dmlc#5538)

* Assert matching length of evaluation inputs. (dmlc#5540)

* Fix r interaction constraints (dmlc#5543)

* Unify the parsing code.

* Cleanup.

* Fix slice and get info. (dmlc#5552)

* gpu_hist performance fixes (dmlc#5558)

* Remove unnecessary cuda API calls

* Fix histogram memory growth

* Use non-synchronising scan (dmlc#5560)

* Fix non-openmp build. (dmlc#5566)


* Add test to Jenkins.
* Fix threading utils tests.
* Require thread library.

* Don't set seed on CLI interface. (dmlc#5563)

* [jvm-packages] XGBoost Spark should deal with NaN when parsing evaluation output (dmlc#5546)

* Group aware GPU sketching. (dmlc#5551)

* Group aware GPU weighted sketching.

* Distribute group weights to each data point.
* Relax the test.
* Validate input meta info.
* Fix metainfo copy ctor.

* Fix configuration when loading model. (dmlc#5562)

* [Breaking] Set output margin to True for custom objective. (dmlc#5564)

* Set output margin to True for custom objective in Python and R.

* Add a demo for writing multi-class custom objective function.

* Run tests on selected demos.
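Since custom objectives now receive raw margins, a multi-class objective has to apply the softmax itself before computing gradients. The following is an illustrative NumPy sketch of the commonly used softmax gradient/Hessian form (not the demo added by this commit; function and variable names are invented here):

```python
import numpy as np

def softprob_obj(margins: np.ndarray, labels: np.ndarray):
    """Sketch of a multi-class objective operating on raw margins.

    margins: (n_samples, n_classes) raw scores; labels: (n_samples,) class ids.
    Returns element-wise gradient and (diagonal) Hessian, the pair a
    custom objective is expected to supply.
    """
    # Numerically stable softmax over the raw margins.
    z = margins - margins.max(axis=1, keepdims=True)
    p = np.exp(z)
    p /= p.sum(axis=1, keepdims=True)

    onehot = np.eye(margins.shape[1])[labels]
    grad = p - onehot              # d(cross-entropy)/d(margin)
    hess = 2.0 * p * (1.0 - p)    # common diagonal Hessian approximation
    return grad, hess

margins = np.array([[2.0, 0.5, -1.0], [0.0, 0.0, 0.0]])
labels = np.array([0, 2])
grad, hess = softprob_obj(margins, labels)
```

Because probabilities and the one-hot target each sum to one, every gradient row sums to zero — a quick sanity check when writing such objectives.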

* For histograms, opt into the maximum shared memory available per block. (dmlc#5491)

* Use cudaDeviceGetAttribute instead of cudaGetDeviceProperties (dmlc#5570)

* Restore attributes in complete. (dmlc#5573)

* Enable parameter validation for R. (dmlc#5569)

* Enable parameter validation for R.

* Add test.

* Update document. (dmlc#5572)

* Port R compatibility patches from 1.0.0 release branch (dmlc#5577)

* Don't use memset to set struct when compiling for R

* Support 32-bit Solaris target for R package

* [CI] Use Vault repository to re-gain access to devtoolset-4 (dmlc#5589)

* [CI] Use Vault repository to re-gain access to devtoolset-4

* Use manylinux2010 tag

* Update Dockerfile.jvm

* Fix rename_whl.py

* Upgrade Pip, to handle manylinux2010 tag

* Update insert_vcomp140.py

* Update test_python.sh

* Avoid rabit calls in learner configuration (dmlc#5581)

* Hide C++ symbols in libxgboost.so when building Python wheel (dmlc#5590)

* Hide C++ symbols in libxgboost.so when building Python wheel

* Update Jenkinsfile

* Add test

* Upgrade rabit

* Add setup.py option.

Co-authored-by: fis <[email protected]>

* Set device in device dmatrix. (dmlc#5596)

* Fix compilation on Mac OSX High Sierra (10.13) (dmlc#5597)

* Fix compilation on Mac OSX High Sierra

* [CI] Build Mac OSX binary wheel using Travis CI

* [CI] Grant public read access to Mac OSX wheels (dmlc#5602)

* [R] Address warnings to comply with CRAN submission policy (dmlc#5600)

* [R] Address warnings to comply with CRAN submission policy

* Include <xgboost/logging.h>

* Instruct Mac users to install libomp (dmlc#5606)

* Clarify meaning of `training` parameter in XGBoosterPredict() (dmlc#5604)

Co-authored-by: Hyunsu Cho <[email protected]>
Co-authored-by: Jiaming Yuan <[email protected]>

* Better message when no GPU is found. (dmlc#5594)

* Refactor the CLI. (dmlc#5574)


* Enable parameter validation.
* Enable JSON.
* Catch `dmlc::Error`.
* Show help message.

* Move dask tutorial closer other distributed tutorials (dmlc#5613)

* Refactor gpu_hist split evaluation (dmlc#5610)

* Refactor

* Rewrite evaluate splits

* Add more tests

* Fix build on big endian CPUs (dmlc#5617)

* Fix build on big endian CPUs

* Clang-tidy

* Remove dead code. (dmlc#5635)

* Move device dmatrix construction code into ellpack. (dmlc#5623)

* Enhance nvtx support. (dmlc#5636)

* Support 64bit seed. (dmlc#5643)

* Resolve vector<bool>::iterator crash (dmlc#5642)

* Reduce device synchronisation (dmlc#5631)

* Reduce device synchronisation

* Initialise pinned memory

* Upgrade to CUDA 10.0 (dmlc#5649) (dmlc#5652)

Co-authored-by: fis <[email protected]>

Co-authored-by: Philip Hyunsu Cho <[email protected]>

* skip missing lookup if nothing is missing in CPU hist partition kernel. (dmlc#5644)

* [xgboost] skip missing lookup if nothing is missing

* Update Python demos with tests. (dmlc#5651)

* Remove GPU memory usage demo.
* Add tests for demos.
* Remove `silent`.
* Remove shebang as it's not portable.

* Add JSON schema to model dump. (dmlc#5660)

* Pseudo-huber loss metric added (dmlc#5647)


- Add pseudo huber loss objective.
- Add pseudo huber loss metric.

Co-authored-by: Reetz <[email protected]>
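For reference, the pseudo-Huber loss and the first two derivatives an objective supplies can be written out directly. This is a generic sketch of the standard formula with slope delta, not the exact code added by this commit:

```python
import numpy as np

def pseudo_huber(residual: np.ndarray, delta: float = 1.0):
    """Pseudo-Huber loss with gradient and Hessian w.r.t. the prediction.

    loss = delta^2 * (sqrt(1 + (r/delta)^2) - 1)
    grad = r / sqrt(1 + (r/delta)^2)
    hess = 1 / (1 + (r/delta)^2)^(3/2)
    where r = prediction - label. The loss is quadratic near zero and
    linear for |r| >> delta, so it is robust to outliers yet smooth.
    """
    scaled = (residual / delta) ** 2
    root = np.sqrt(1.0 + scaled)
    loss = delta**2 * (root - 1.0)
    grad = residual / root
    hess = 1.0 / (1.0 + scaled) ** 1.5
    return loss, grad, hess

loss, grad, hess = pseudo_huber(np.array([0.0, 3.0, -1000.0]))
```

Note how the gradient saturates at ±delta for large residuals, which is what caps the influence of outliers compared to plain squared error.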

* [JVM Packages] Catch dmlc error by ref. (dmlc#5678)

* Remove silent from R demos. (dmlc#5675)

* Remove silent from R demos.

* Vignettes.

* add pointers to the gpu external memory paper (dmlc#5684)

* Distributed optimizations for 'hist' method with CPUs (dmlc#5557)


Co-authored-by: SHVETS, KIRILL <[email protected]>

* Document more objective parameters in R package (dmlc#5682)

* C++14 for xgboost (dmlc#5664)

* Implement Python data handler. (dmlc#5689)


* Define data handlers for DMatrix.
* Throw ValueError in scikit learn interface.

* [R-package] Reduce duplication in configure.ac (dmlc#5693)


* updated configure

* Remove redundant sketching. (dmlc#5700)

* [R] Fix duplicated libomp.dylib error on Mac OSX (dmlc#5701)

* Fix IsDense. (dmlc#5702)

* Let XGBoostError inherit ValueError. (dmlc#5696)

* Define _CRT_SECURE_NO_WARNINGS to remove unneeded warnings in MSVC (dmlc#5434)

* Changed build.rst (binary wheels are supported for macOS also) (dmlc#5711)

* [CI] Remove CUDA 9.0 from Windows CI. (dmlc#5674)

* Remove CUDA 9.0 on Windows CI.

* Require cuda10 tag, to differentiate

Co-authored-by: Philip Hyunsu Cho <[email protected]>

* Require CUDA 10.0+ in CMake build (dmlc#5718)

* Require Python 3.6+; drop Python 3.5 from CI (dmlc#5715)

* [dask] Return GPU Series when input is from cuDF. (dmlc#5710)


* Refactor predict function.

* [Doc] Fix typos in AFT tutorial (dmlc#5716)

* gpu_hist performance tweaks (dmlc#5707)

* Remove device vectors

* Remove allreduce synchronize

* Remove double buffer

* Allow pass fmap to importance plot (dmlc#5719)

Co-authored-by: Peter Jung <[email protected]>
Co-authored-by: Hyunsu Cho <[email protected]>

* Fix release degradation (dmlc#5720)

* fix release degradation, related to 5666

* less resizes

Co-authored-by: SHVETS, KIRILL <[email protected]>

* Fix loading old model. (dmlc#5724)


* Add test.

* Bump version to 1.2.0 snapshot in master (dmlc#5733)

* Add swift package reference (dmlc#5728)

Co-authored-by: Peter Jung <[email protected]>

* Don't use mask in array interface. (dmlc#5730)

* Bump version in header. (dmlc#5742)

* [CI] Remove CUDA 9.0 from CI (dmlc#5745)

* Add pkgconfig to cmake (dmlc#5744)

* Add pkgconfig to cmake

* Move xgboost.pc.in to cmake/

Co-authored-by: Peter Jung <[email protected]>
Co-authored-by: Hyunsu Cho <[email protected]>

* Expose device sketching in header. (dmlc#5747)

* Add Python binding for rabit ops. (dmlc#5743)

* Add float32 histogram (dmlc#5624)

* A new `single_precision_histogram` parameter was added.

Co-authored-by: SHVETS, KIRILL <[email protected]>
Co-authored-by: fis <[email protected]>
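The trade-off behind making single precision opt-in is classic accumulator round-off: once a float32 sum grows large enough, small increments can be dropped entirely. A minimal self-contained illustration (unrelated to XGBoost's actual histogram kernels, but the same failure mode applies to summing many gradients into one bin):

```python
import numpy as np

# Accumulate 1.0 a thousand times on top of 2**24.
# At this magnitude the spacing between consecutive float32 values is 2,
# so 2**24 + 1 is not representable; round-to-nearest-even sends every
# intermediate result back to 2**24 and the float32 sum never moves.
acc32 = np.float32(2**24)
acc64 = np.float64(2**24)
for _ in range(1000):
    acc32 = np.float32(acc32 + np.float32(1.0))
    acc64 = acc64 + 1.0

print(acc32)  # 16777216.0 -- all 1000 additions lost
print(acc64)  # 16778216.0 -- exact
```

This is why the double-precision accumulator remains available and the float32 path is exposed as an explicit option.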

* Reorder includes. (dmlc#5749)

* Reorder includes.

* R.

* Remove `max.depth` in R gblinear example. (dmlc#5753)

* Speed up python test (dmlc#5752)

* Speed up tests

* Prevent DeviceQuantileDMatrix initialisation with numpy

* Use joblib.memory

* Use RandomState

* Add helper for generating batches of data. (dmlc#5756)

* Add helper for generating batches of data.

* VC keyword clash.

* Another clash.

* Remove column major specialization. (dmlc#5755)


Co-authored-by: Hyunsu Cho <[email protected]>

* Document addition of new committer @SmirnovEgorRu (dmlc#5762)

* Add release note for 1.1.0 in NEWS.md (dmlc#5763)

* Add release note for 1.1.0 in NEWS.md

* Address reviewer's feedback

* Revert "Reorder includes. (dmlc#5749)" (dmlc#5771)

This reverts commit d3a0efb.

* [python-package] remove unused imports (dmlc#5776)

* Added conda environment file for building docs (dmlc#5773)

* [R] replace uses of T and F with TRUE and FALSE (dmlc#5778)

* [R-package] replace uses of T and F with TRUE and FALSE

* enable linting

* Remove skip

Co-authored-by: Philip Hyunsu Cho <[email protected]>

* Implement weighted sketching for adapter. (dmlc#5760)


* Bounded memory tests.
* Fixed memory estimation.

* Avoid including `c_api.h` in header files. (dmlc#5782)

* Implement `Empty` method for host device vector. (dmlc#5781)

* Fix accessing nullptr.

* Bump com.esotericsoftware to 4.0.2 (dmlc#5690)

Co-authored-by: Antti Saukko <[email protected]>

* [DOC] Mention dask blog post in doc. [skip ci] (dmlc#5789)

* [R] Remove dependency on gendef for Visual Studio builds (fixes dmlc#5608) (dmlc#5764)

* [R-package] Remove dependency on gendef for Visual Studio builds (fixes dmlc#5608)

* clarify docs

* removed debugging print statement

* Make R CMake install more robust

* Fix doc format; add ToC

* Update build.rst

* Fix AppVeyor

Co-authored-by: Hyunsu Cho <[email protected]>

* Add new skl model attribute for number of features (dmlc#5780)

* Fix exception causes all over the codebase (dmlc#5787)

* Use hypothesis (dmlc#5759)

* Use hypothesis

* Allow int64 array interface for groups

* Add packages to Windows CI

* Add to travis

* Make sure device index is set correctly

* Fix dask-cudf test

* appveyor

* Accept string for ArrayInterface constructor.

* Revert "Accept string for ArrayInterface constructor."

This reverts commit e8ecafb.

* Implement fast number serialization routines. (dmlc#5772)

* Implement ryu algorithm.
* Implement integer printing.
* Full coverage roundtrip test.

* Add cupy to Windows CI (dmlc#5797)

* Add cupy to Windows CI

* Update Jenkinsfile-win64

Co-authored-by: Philip Hyunsu Cho <[email protected]>

* Update Jenkinsfile-win64

Co-authored-by: Philip Hyunsu Cho <[email protected]>

* Update tests/python-gpu/test_gpu_prediction.py

Co-authored-by: Philip Hyunsu Cho <[email protected]>

Co-authored-by: Philip Hyunsu Cho <[email protected]>

* Add an option to run brute-force test for JSON round-trip (dmlc#5804)

* Add an option to run brute-force test for JSON round-trip

* Apply reviewer's feedback

* Remove unneeded objects

* Parallel run.

* Max.

* Use signed 64-bit loop var, to support MSVC

* Add exhaustive test to CI

* Run JSON test in Win build worker

* Revert "Run JSON test in Win build worker"

This reverts commit c97b2c7.

* Revert "Add exhaustive test to CI"

This reverts commit c149c2c.

Co-authored-by: fis <[email protected]>

* [CI] Fix cuDF install; merge 'gpu' and 'cudf' test suite (dmlc#5814)

* Implement extend method for meta info. (dmlc#5800)

* Implement extend for host device vector.

* Update rabit. (dmlc#5680)

* Update document for model dump. (dmlc#5818)

* Clarify the relationship between dump and save.
* Mention the schema.

* [Doc] Fix rendering of Markdown docs, e.g. R doc (dmlc#5821)

* Remove unweighted GK quantile. (dmlc#5816)

* Rename Ant Financial to Ant Group (dmlc#5827)

* Accept string for ArrayInterface constructor. (dmlc#5799)

* Implement a DMatrix Proxy. (dmlc#5803)

* Relax test for shotgun. (dmlc#5835)

* Relax linear test. (dmlc#5849)

* Increased error in coordinate is mostly due to floating point error.
* Shotgun uses Hogwild!, which is non-deterministic and can have even greater
floating point error.

* Implement iterative DMatrix. (dmlc#5837)

* Ensure that LoadSequentialFile() actually read the whole file (dmlc#5831)

* Add c-api-demo to .gitignore (dmlc#5855)

* Use dmlc stream when URI protocol is not local file. (dmlc#5857)

* Move feature names and types of DMatrix from Python to C++. (dmlc#5858)


* Add thread local return entry for DMatrix.
* Save feature name and feature type in binary file.

Co-authored-by: Philip Hyunsu Cho <[email protected]>

* Split Features into Groups to Compute Histograms in Shared Memory (dmlc#5795)

* Implement GK sketching on GPU. (dmlc#5846)

* Implement GK sketching on GPU.
* Strong tests on quantile building.
* Handle sparse dataset by binary searching the column index.
* Hypothesis test on dask.

* Accept iterator in device dmatrix. (dmlc#5783)


* Remove Device DMatrix.

* Remove print. (dmlc#5867)

* fix device sketch with weights in external memory mode (dmlc#5870)

* [Doc] Document that CUDA 10.0 is required [skip ci] (dmlc#5872)

* [CI] Simplify CMake build with modern CMake techniques (dmlc#5871)

* [CI] Simplify CMake build

* Make sure that plugins can be built

* [CI] Install lz4 on Mac

* Add new parameter singlePrecisionHistogram to xgboost4j-spark (dmlc#5811)

Expose the existing 'singlePrecisionHistogram' param to the Spark layer.

* Upgrade Rabit (dmlc#5876)

* [jvm-packages] update spark dependency to 3.0.0 (dmlc#5836)

* Cleanup on device sketch. (dmlc#5874)

* Remove old functions.

* Merge weighted and un-weighted into a common interface.

* [CI] Enforce daily budget in Jenkins CI (dmlc#5884)

* [CI] Throttle Jenkins CI

* Don't use Jenkins master instance

* Add XGBoosterGetNumFeature (dmlc#5856)

- add GetNumFeature to Learner
- add XGBoosterGetNumFeature to C API
- update c-api-demo accordingly

* Fix NDK Build. (dmlc#5886)

* Explicit cast for slice.

* [CI] Reduce load on Windows CI pipeline (dmlc#5892)

* Fix R package build with CMake 3.13 (dmlc#5895)

* Fix R package build with CMake 3.13

* Require OpenMP for xgboost-r target

* Simplify the data backends. (dmlc#5893)

* [CI] update spark version to 3.0.0 (dmlc#5890)

* [CI] update spark version to 3.0.0

* Update Dockerfile.jvm_cross

Co-authored-by: Philip Hyunsu Cho <[email protected]>

* Fix sketch size calculation. (dmlc#5898)

* Dask device dmatrix (dmlc#5901)


* Fix softprob with empty dmatrix.

* GPU implementation of AFT survival objective and metric (dmlc#5714)

* Add interval accuracy

* De-virtualize AFT functions

* Lint

* Refactor AFT metric using GPU-CPU reducer

* Fix R build

* Fix build on Windows

* Fix copyright header

* Clang-tidy

* Fix crashing demo

* Fix typos in comment; explain GPU ID

* Remove unnecessary #include

* Add C++ test for interval accuracy

* Fix a bug in accuracy metric: use log pred

* Refactor AFT objective using GPU-CPU Transform

* Lint

* Fix lint

* Use Ninja to speed up build

* Use time, not /usr/bin/time

* Add cpu_build worker class, with concurrency = 1

* Use concurrency = 1 only for CUDA build

* concurrency = 1 for clang-tidy

* Address reviewer's feedback

* Update link to AFT paper

* Fix Windows 2016 build. (dmlc#5902)

* Further improvements and savings in Jenkins pipeline (dmlc#5904)

* Publish artifacts only on the master and release branches

* Build CUDA only for Compute Capability 7.5 when building PRs

* Run all Windows jobs in a single worker image

* Build nightly XGBoost4J SNAPSHOT JARs with Scala 2.12 only

* Show skipped Python tests on Windows

* Make Graphviz optional for Python tests

* Add back C++ tests

* Unstash xgboost_cpp_tests

* Fix label to CUDA 10.1

* Install cuPy for CUDA 10.1

* Install jsonschema

* Address reviewer's feedback

* Support building XGBoost with CUDA 11 (dmlc#5808)

* Change serialization test.
* Add CUDA 11 tests on Linux CI.

Co-authored-by: Philip Hyunsu Cho <[email protected]>

* Add Github Action for R. (dmlc#5911)

* Fix lintr errors.

* Fix typo in CI. [skip ci] (dmlc#5919)

* [Doc] Document new objectives and metrics available on GPUs (dmlc#5909)

* Fix mingw build with R. (dmlc#5918)

* Add option to enable all compiler warnings in GCC/Clang (dmlc#5897)

* Add option to enable all compiler warnings in GCC/Clang

* Fix -Wall for CUDA sources

* Make -Wall private req for xgboost-r

* Setup github action. (dmlc#5917)

* Remove R and JVM from appveyor. (dmlc#5922)

* Fix r early stop with custom objective. (dmlc#5923)

* Specify `ntreelimit`.

* Add explicit template specialization for portability (dmlc#5921)

* Add explicit template specializations

* Adding Specialization for FileAdapterBatch

* Cache dependencies on Github Action. (dmlc#5928)

* Use `cudaOccupancyMaxPotentialBlockSize` to calculate the block size. (dmlc#5926)

* [BLOCKING] Handle empty rows in data iterators correctly (dmlc#5929)

* [jvm-packages] Handle empty rows in data iterators correctly

* Fix clang-tidy error

* last empty row

* Add comments [skip ci]

Co-authored-by: Nan Zhu <[email protected]>

* [CI] Make Python model compatibility test runnable locally (dmlc#5941)

* [BLOCKING] Remove to_string. (dmlc#5934)

* [R] Add a compatibility layer to load Booster object from an old RDS file (dmlc#5940)

* [R] Add a compatibility layer to load Booster from an old RDS
* Modify QuantileHistMaker::LoadConfig() to be backward compatible with 1.1.x
* Add a big warning about compatibility in QuantileHistMaker::LoadConfig()
* Add testing suite
* Discourage use of saveRDS() in CRAN doc

* [R] Enable weighted learning to rank (dmlc#5945)

* [R] enable weighted learning to rank

* Add R unit test for ranking

* Fix lint

* [BLOCKING] [jvm-packages] add gpu_hist and enable gpu scheduling (dmlc#5171)

* [jvm-packages] add gpu_hist tree method

* change updater hist to grow_quantile_histmaker

* add gpu scheduling

* pass correct parameters to xgboost library

* remove debug info

* add use.cuda for pom

* add CI for gpu_hist for jvm

* add gpu unit tests

* use gpu node to build jvm

* use nvidia-docker

* Add CLI interface to create_jni.py using argparse

Co-authored-by: Hyunsu Cho <[email protected]>

* [CI] Improve R linter script (dmlc#5944)

* [CI] Move lint to a separate script

* [CI] Improved lintr launcher

* Add lintr as a separate action

* Add custom parsing logic to print out logs

* Fix lintr issues in demos

* Run R demos

* Fix CRAN checks

* Install XGBoost into R env before running lintr

* Install devtools (needed to run demos)

* Fix prediction heuristic (dmlc#5955)


* Relax check for prediction.
* Relax test in spark test.
* Add tests in C++.

* [Breaking] Fix custom metric for multi output. (dmlc#5954)


* Set output margin to true for custom metric.  This fixes only R and Python.

* Disable feature validation on sklearn predict prob. (dmlc#5953)


* Fix issue when scikit learn interface receives transformed inputs.

* [CI] Fix broken Docker container 'cpu' (dmlc#5956)

* Fix evaluate root split. (dmlc#5948)

* [Dask] Asyncio support. (dmlc#5862)

* Thread-safe prediction by making the prediction cache thread-local. (dmlc#5853)

Co-authored-by: Jiaming Yuan <[email protected]>

* Force colored output for ninja build. (dmlc#5959)

* Update XGBoost + Dask overview documentation (dmlc#5961)

* Add imports to code snippet

* Better writing.

* Add CMake flag to log C API invocations, to aid debugging (dmlc#5925)

* Add CMake flag to log C API invocations, to aid debugging

* Remove unnecessary parentheses

* [CI] Assign larger /dev/shm to NCCL (dmlc#5966)

* [CI] Assign larger /dev/shm to NCCL

* Use 10.2 artifact to run multi-GPU Python tests

* Add CUDA 10.0 -> 11.0 cross-version test; remove CUDA 10.0 target

* Add missing Pytest marks to AsyncIO unit test (dmlc#5968)

* [R] Provide better guidance for persisting XGBoost model (dmlc#5964)

* [R] Provide better guidance for persisting XGBoost model

* Update saving_model.rst

* Add a paragraph about xgb.serialize()

* [jvm-packages] Fix wrong method name `setAllowZeroForMissingValue`. (dmlc#5740)

* Allow non-zero for missing value when training.

* Fix wrong method names.

* Add a unit test

* Move the getter/setter unit test to MissingValueHandlingSuite

Co-authored-by: Hyunsu Cho <[email protected]>

* Export DaskDeviceQuantileDMatrix in doc. [skip ci] (dmlc#5975)

* Fix sklearn doc. (dmlc#5980)

* Update Python custom objective demo. (dmlc#5981)

* Update JSON schema. (dmlc#5982)


* Update JSON schema for pseudo huber.
* Update JSON model schema.

* Fix missing data warning. (dmlc#5969)

* Fix data warning.

* Add numpy/scipy test.

* Enforce tree order in JSON. (dmlc#5974)


* Make JSON model IO more future proof by using tree id in model loading.

* Fix dask predict shape infer. (dmlc#5989)

* [R] fix uses of 1:length(x) and other small things (dmlc#5992)

* Fix typo in tracker logging (dmlc#5994)

* Introducing DPC++-based plugin (predictor, objective function) supporting oneAPI programming model (dmlc#5825)

* Added plugin with DPC++-based predictor and objective function

* Update CMakeLists.txt

* Update regression_obj_oneapi.cc

* Added README.md for OneAPI plugin

* Added OneAPI predictor support to gbtree

* Update README.md

* Merged kernels in gradient computation. Enabled multiple loss functions with DPC++ backend

* Aligned plugin CMake files with latest master changes. Fixed whitespace typos

* Removed debug output

* [CI] Make oneapi_plugin a CMake target

* Added tests for OneAPI plugin for predictor and obj. functions

* Temporarily switched to default selector for device dispatching in OneAPI plugin to enable execution in environments without GPUs

* Updated readme file.

* Fixed USM usage in predictor

* Removed workaround with explicit templated names for DPC++ kernels

* Fixed warnings in plugin tests

* Fix CMake build of gtest

Co-authored-by: Hyunsu Cho <[email protected]>

* Remove skmaker. (dmlc#5971)

* Rabit update. (dmlc#5978)

* Remove parameter on JVM Packages.

* Move warning about empty dataset. (dmlc#5998)

* [Breaking] Fix .predict() method and add .predict_proba() in xgboost.dask.DaskXGBClassifier (dmlc#5986)

* Unify CPU hist sketching (dmlc#5880)

* Fix nightly build doc. [skip ci] (dmlc#6004)

* Fix nightly build doc. [skip ci]

* Fix title too short. [skip ci]

* RMM integration plugin (dmlc#5873)

* [CI] Add RMM as an optional dependency

* Replace caching allocator with pool allocator from RMM

* Revert "Replace caching allocator with pool allocator from RMM"

This reverts commit e15845d.

* Use rmm::mr::get_default_resource()

* Try setting default resource (doesn't work yet)

* Allocate pool_mr in the heap

* Prevent leaking pool_mr handle

* Separate EXPECT_DEATH() in separate test suite suffixed DeathTest

* Turn off death tests for RMM

* Address reviewer's feedback

* Prevent leaking of cuda_mr

* Fix Jenkinsfile syntax

* Remove unnecessary function in Jenkinsfile

* [CI] Install NCCL into RMM container

* Run Python tests

* Try building with RMM, CUDA 10.0

* Do not use RMM for CUDA 10.0 target

* Actually test for test_rmm flag

* Fix TestPythonGPU

* Use CNMeM allocator, since pool allocator doesn't yet support multiGPU

* Use 10.0 container to build RMM-enabled XGBoost

* Revert "Use 10.0 container to build RMM-enabled XGBoost"

This reverts commit 789021f.

* Fix Jenkinsfile

* [CI] Assign larger /dev/shm to NCCL

* Use 10.2 artifact to run multi-GPU Python tests

* Add CUDA 10.0 -> 11.0 cross-version test; remove CUDA 10.0 target

* Rename Conda env rmm_test -> gpu_test

* Use env var to opt into CNMeM pool for C++ tests

* Use identical CUDA version for RMM builds and tests

* Use Pytest fixtures to enable RMM pool in Python tests

* Move RMM to plugin/CMakeLists.txt; use PLUGIN_RMM

* Use per-device MR; use command arg in gtest

* Set CMake prefix path to use Conda env

* Use 0.15 nightly version of RMM

* Remove unnecessary header

* Fix a unit test when cudf is missing

* Add RMM demos

* Remove print()

* Use HostDeviceVector in GPU predictor

* Simplify pytest setup; use LocalCUDACluster fixture

* Address reviewers' commments

Co-authored-by: Hyunsu Cho <[email protected]>

Co-authored-by: Jiaming Yuan <[email protected]>
Co-authored-by: James Lamb <[email protected]>
Co-authored-by: sriramch <[email protected]>
Co-authored-by: Rory Mitchell <[email protected]>
Co-authored-by: Avinash Barnwal <[email protected]>
Co-authored-by: Philip Cho <[email protected]>
Co-authored-by: ShvetsKS <[email protected]>
Co-authored-by: Paul Kaefer <[email protected]>
Co-authored-by: Yuan Tang <[email protected]>
Co-authored-by: Rong Ou <[email protected]>
Co-authored-by: Zhang Zhang <[email protected]>
Co-authored-by: Bobby Wang <[email protected]>
Co-authored-by: Liang-Chi Hsieh <[email protected]>
Co-authored-by: Nicolas Scozzaro <[email protected]>
Co-authored-by: Kamil A. Kaczmarek <[email protected]>
Co-authored-by: Melissa Kohl <[email protected]>
Co-authored-by: SHVETS, KIRILL <[email protected]>
Co-authored-by: Liang-Chi Hsieh <[email protected]>
Co-authored-by: Andy Adinets <[email protected]>
Co-authored-by: Jason E. Aten, Ph.D <[email protected]>
Co-authored-by: Oleksandr Kuvshynov <[email protected]>
Co-authored-by: LionOrCatThatIsTheQuestion <[email protected]>
Co-authored-by: Reetz <[email protected]>
Co-authored-by: Lorenz Walthert <[email protected]>
Co-authored-by: Dmitry Mottl <[email protected]>
Co-authored-by: Peter Jung <[email protected]>
Co-authored-by: Peter Jung <[email protected]>
Co-authored-by: Elliot Hershberg <[email protected]>
Co-authored-by: anttisaukko <[email protected]>
Co-authored-by: Antti Saukko <[email protected]>
Co-authored-by: Alex <[email protected]>
Co-authored-by: Ram Rachum <[email protected]>
Co-authored-by: Alexander Gugel <[email protected]>
Co-authored-by: Nan Zhu <[email protected]>
Co-authored-by: boxdot <[email protected]>
Co-authored-by: James Bourbeau <[email protected]>
Co-authored-by: Shaochen Shi <[email protected]>
Co-authored-by: Anthony D'Amato <[email protected]>
Co-authored-by: Vladislav Epifanov <[email protected]>
Co-authored-by: jameskrach <[email protected]>
Co-authored-by: Hyunsu Cho <[email protected]>