
[REVIEW] Using 64-bit array lengths to increase scale of pca & tsvd #3983

Merged: 4 commits into rapidsai:branch-21.08 on Jul 23, 2021

Conversation

cjnolet (Member) commented Jun 11, 2021:

Addresses #2459 (likely not all of it)

cjnolet requested a review from a team as a code owner on June 11, 2021 20:06
cjnolet added the bug (Something isn't working) and non-breaking (Non-breaking change) labels on Jun 11, 2021
cjnolet changed the title from "[WIP] Using 64-bit array lengths to increase scale of several algorithms" to "[REVIEW] Using 64-bit array lengths to increase scale of pca & tsvd" on Jul 21, 2021
cjnolet (Member, Author) commented Jul 21, 2021:

I had started going down the path of updating the cublas/cusolver wrappers and then realized some of them are actually failing before the calls to cusolver (though I think the failure manifests during the cusolver calls because of the overflow).
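For context, a minimal, self-contained sketch (not code from this PR) of the kind of 32-bit length overflow this change targets: when a matrix's element count is computed in a 32-bit int, the product n_rows * n_cols can wrap before any CUDA library call ever sees it, so the length needs to be held in a 64-bit type. The shape below is hypothetical.

#include <cstdint>
#include <cstdio>
#include <limits>

int main() {
  // Hypothetical shape; any matrix with more than ~2.1 billion elements is affected.
  const std::int64_t n_rows = 100000;
  const std::int64_t n_cols = 30000;

  // 64-bit length, the kind of array length this PR moves toward.
  const std::int64_t len64 = n_rows * n_cols;  // 3,000,000,000 elements

  // What a 32-bit int length would end up holding (shown via an explicit
  // narrowing cast so this example itself has no signed-overflow UB).
  const std::int32_t len32 = static_cast<std::int32_t>(len64);

  std::printf("64-bit length:           %lld\n", static_cast<long long>(len64));
  std::printf("32-bit truncated length: %d\n", len32);
  std::printf("INT32_MAX:               %d\n", std::numeric_limits<std::int32_t>::max());
  return 0;
}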

dantegd (Member) left a comment:

Changes look good; it just seems like the latest merge of upstream left some extraneous lines.

auto allocator = handle.get_device_allocator();
device_buffer<math_t> components_all(allocator, stream, len);
device_buffer<math_t> explained_var_all(allocator, stream, prms.n_cols);
device_buffer<math_t> explained_var_ratio_all(allocator, stream, prms.n_cols);

<<<<<<< HEAD
dantegd (Member): Straggling line from conflict resolution

auto allocator = handle.get_device_allocator();
device_buffer<math_t> components_all(allocator, stream, len);
device_buffer<math_t> explained_var_all(allocator, stream, prms.n_cols);
device_buffer<math_t> explained_var_ratio_all(allocator, stream, prms.n_cols);

<<<<<<< HEAD
printf("About to call calEig\n");
dantegd (Member): Straggling print

calEig<math_t, enum_solver>(
handle, in, components_all.data(), explained_var_all.data(), prms, stream);

printf("Called calEig\n");
dantegd (Member): Straggling print

explained_var_all.data(), prms.n_cols, explained_var, prms.n_components, 1, stream);
raft::matrix::truncZeroOrigin(
explained_var_ratio_all.data(), prms.n_cols, explained_var_ratio, prms.n_components, 1, stream);
=======
dantegd (Member): Straggling line from conflict resolution

@@ -60,6 +75,7 @@ void truncCompExpVars(const raft::handle_t& handle,
explained_var_all.data(), prms.n_cols, explained_var, prms.n_components, 1, stream);
raft::matrix::truncZeroOrigin(
explained_var_ratio_all.data(), prms.n_cols, explained_var_ratio, prms.n_components, 1, stream);
>>>>>>> branch-21.08
dantegd (Member): Straggling line from conflict resolution

dantegd (Member) commented Jul 22, 2021:

@gpucibot merge

codecov-commenter commented:
Codecov Report

❗ No coverage uploaded for pull request base (branch-21.08@11088d6).
The diff coverage is n/a.

@@               Coverage Diff               @@
##             branch-21.08    #3983   +/-   ##
===============================================
  Coverage                ?   85.77%           
===============================================
  Files                   ?      231           
  Lines                   ?    18261           
  Branches                ?        0           
===============================================
  Hits                    ?    15664           
  Misses                  ?     2597           
  Partials                ?        0           
Flag       Coverage Δ
dask       48.19% <0.00%> (?)
non-dask   78.24% <0.00%> (?)

Flags with carried forward coverage won't be shown.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 11088d6...1b27783.

rapids-bot (bot) merged commit 40af8af into rapidsai:branch-21.08 on Jul 23, 2021
vimarsh6739 pushed a commit to vimarsh6739/cuml that referenced this pull request on Oct 9, 2023
Labels: bug (Something isn't working), CUDA/C++, non-breaking (Non-breaking change)
Projects: None yet
3 participants