[BUG] Reindex Start Vertices and Batch Ids Prior to Sampling Call #3393

Merged: rapids-bot merged 9 commits into rapidsai:branch-23.04 from alexbarghi-nv:cugraph-gnn-fix-sample-index on Apr 3, 2023
Conversation
alexbarghi-nv added the bug (Something isn't working) and non-breaking (Non-breaking change) labels on Mar 29, 2023
VibhuJawa suggested changes on Mar 30, 2023:
PR looks good. Thanks for debugging, but please add a test to catch this.
rlratzel approved these changes on Mar 30, 2023
VibhuJawa approved these changes on Mar 30, 2023
jnke2016 approved these changes on Mar 31, 2023
rapids-bot (bot) pushed a commit that referenced this pull request on Apr 2, 2023:
This PR adds a working multi-GPU graph (on 2 Dask workers) being trained/loaded on multiple PyTorch trainers.

Todo:
- [x] Verify it works on multiple trainers and multiple Dask workers
- [x] Show scaling as the number of training GPUs increases

At around 1 second per epoch we become bottlenecked by the sampling Dask cluster, but we see a performance improvement when going from `1 GPU` -> `2 GPUs`.

**On OGBN-Products**

```md
| Number of Training GPUs | Time per epoch |
|-------------------------|----------------|
| 1                       | 2.3 s          |
| 2                       | 0.582 s        |
| 4                       | 0.792 s        |
```

This PR depends upon: #3393

CC: @rlratzel, @alexbarghi-nv, @BradReesWork

Authors:
- Vibhu Jawa (https://github.com/VibhuJawa)
- Alex Barghi (https://github.com/alexbarghi-nv)

Approvers:
- Alex Barghi (https://github.com/alexbarghi-nv)

URL: #3212
/merge
This PR fixes a bug where the batch ids in the sampling output do not match the expected batch ids when using the bulk sampler, producing subgraphs that are both larger than expected and incorrect. Without reindexing, the wrong batch ids are assigned to the start vertices; reindexing both before the sampling call preserves the same ordering for batch ids and start vertices.
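As an illustration, here is a minimal sketch with toy data (not the actual cuGraph code path) of the failure mode: after a shuffle or filter the start-vertex frame keeps a non-contiguous index, so any positional pairing of start vertices with batch ids downstream can go wrong unless the index is reset first.

```python
# Minimal sketch with toy data; not the actual cuGraph implementation.
import cudf

df = cudf.DataFrame({
    "start": [10, 11, 12, 13],  # start vertices
    "batch": [0, 0, 1, 1],      # batch id assigned to each start vertex
})

# Shuffling (e.g. when partitioning work across workers) leaves a
# non-contiguous index behind.
df = df.sample(frac=1.0)

# Reindexing before the sampling call keeps the start-vertex/batch-id
# pairing intact when the two columns are later consumed positionally.
df = df.reset_index(drop=True)
start_vertices = df["start"]
batch_ids = df["batch"]
```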
This PR also changes the empty dataframe passed to Dask in `uniform_neighbor_sample` so that `batch_id` and `hop_id` appear in the correct order. This ensures the columns are named correctly and are not inadvertently renamed because they were created in a different order. The change is non-breaking because it restores the original bulk sampling behavior and reverses a bug that was inadvertently introduced with the Dask updates.
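For context, a hedged sketch (hypothetical column names and dtypes; the real `uniform_neighbor_sample` output schema may differ) of why column order in the empty meta dataframe matters: Dask trusts the meta for the output schema, so if its column order differs from what the workers actually return, columns can end up mislabeled.

```python
# Hypothetical column names/dtypes for illustration only.
import cudf
import dask.dataframe as dd
from dask import delayed

# Empty "meta" dataframe whose columns are listed in the same order the
# workers actually return them (hop_id before batch_id here).
empty_meta = cudf.DataFrame({
    "sources": cudf.Series([], dtype="int32"),
    "destinations": cudf.Series([], dtype="int32"),
    "weight": cudf.Series([], dtype="float32"),
    "hop_id": cudf.Series([], dtype="int32"),
    "batch_id": cudf.Series([], dtype="int32"),
})

# Dask uses the meta to label the result's columns; a mismatched order in
# meta can silently relabel them. (Assumes dask_cudf is installed so Dask
# can dispatch on cudf objects.)
parts = [delayed(lambda: empty_meta)()]
ddf = dd.from_delayed(parts, meta=empty_meta)
```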
Resolves #3390