
Centralizing Partitioning State #1263

Merged: 13 commits from partitioning_ctx into master on Sep 22, 2022

Conversation

@narendasan (Collaborator)

Description

It is incredibly hard to track the state of the partitioning process because the pipeline relies on a number of shared lookup tables that are modified in place during the course of partitioning.

PartitioningCtx, similar to ConversionCtx, centralizes key information such as decisions about node executors, user settings, and the in-progress graph, so that this information is uniformly managed and queryable at any point during the pipeline's execution. This makes it harder for client code to alter state in unpredictable ways, and it also reduces the number of arguments that need to be passed to each phase.

Addresses #949
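
For illustration, a minimal sketch of the shape such a context could take. The names NodeExecutorDecision, node_executor_decision_map, partitioned_blocks, setNodeExecutorDecision, and shouldNodeRunInTorch all appear in diffs later in this thread; the exact members and semantics of the real PartitioningCtx may differ.

```cpp
// Sketch only: a rough outline of a centralized partitioning context.
// Member and method names are taken from identifiers visible in this PR's
// diffs; everything else (types, the placeholder SegmentedBlock, the
// absent-from-map semantics) is an assumption for illustration.
#include <unordered_map>
#include <vector>

#include <torch/csrc/jit/ir/ir.h>

namespace sketch {

enum class NodeExecutorDecision {
  kUNSUPPORTED, // run in Torch due to lack of converter support
  kCONVERT,     // run in TensorRT
  // ... the real enum has additional decision kinds
};

struct SegmentedBlock {}; // placeholder for the real partitioning::SegmentedBlock
using PartitionedGraph = std::vector<SegmentedBlock>;

struct PartitioningCtx {
  // One queryable decision per node, replacing scattered, mutated-in-place LUTs.
  std::unordered_map<torch::jit::Node*, NodeExecutorDecision> node_executor_decision_map;
  // Segmented blocks keyed by the torch::jit::Block they were produced from.
  std::unordered_map<torch::jit::Block*, PartitionedGraph> partitioned_blocks;

  void setNodeExecutorDecision(torch::jit::Node* n, NodeExecutorDecision decision) {
    node_executor_decision_map[n] = decision;
  }

  bool shouldNodeRunInTorch(torch::jit::Node* n) const {
    auto it = node_executor_decision_map.find(n);
    // Assumed semantics: a node absent from the map is treated as TensorRT,
    // matching the assumption discussed further down in this thread.
    return it != node_executor_decision_map.end() &&
        it->second != NodeExecutorDecision::kCONVERT;
  }
};

} // namespace sketch
```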

Type of change

Please delete options that are not relevant and/or add your own.

  • New feature (non-breaking change which adds functionality)

Checklist:

  • My code follows the style guidelines of this project (You can use the linters)
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas and hacks
  • I have made corresponding changes to the documentation
  • I have added tests to verify my fix or my feature
  • New and existing unit tests pass locally with my changes
  • I have added the relevant labels to my PR so that the relevant reviewers are notified

Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>
@narendasan added the WIP (Work is in progress, pull request should not be merged yet) label on Aug 14, 2022
@github-actions bot added the component: api [C++], component: core, component: lowering, component: partitioning, and component: tests labels on Aug 14, 2022
@github-actions bot: Code conforms to C++ style guidelines

@github-actions bot: Code conforms to Python style guidelines

@narendasan (Collaborator, Author)

@bowang007 This should be roughly equivalent to our current system. There are a couple of improvements on this that I'd like to make, and I want your input on them.

  1. I renamed a bunch of things to hopefully make it clearer to new devs; let me know if you disagree about anything.
  2. I want every node to be in the global_fallback_map (now called node_executor_decision_map) with an explicit decision on whether it will run in Torch or TensorRT (this can change, e.g. if a block would run in TRT but is not large enough). That way, when debugging or otherwise, I can just ask about a particular node and get back what it will run on. Right now I am having trouble doing this because there is an assumption in the system that if a node is not in the map it must run in TensorRT (see the sketch below).
  3. There is still work to be done on being able to easily dump state at any given time.
  4. Are there other data sources that we might want to have managed by the context?
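
A hypothetical sketch of what point 2 is aiming for, assuming a PartitioningCtx with setNodeExecutorDecision as in the diffs below. HasConverterFor is an illustrative stand-in for the real converter-support check, not an existing function:

```cpp
// Sketch of point 2 (assumptions noted above): walk a block and record an
// explicit decision for every node, so "what will this node run on?" never
// has to fall back to "absent from the map means TensorRT". Assumes the
// PartitioningCtx / NodeExecutorDecision declarations from this PR are in scope.
#include <torch/csrc/jit/ir/ir.h>

bool HasConverterFor(const torch::jit::Node* n); // assumed helper, defined elsewhere

void AssignExplicitDecisions(PartitioningCtx* ctx, torch::jit::Block* block) {
  for (torch::jit::Node* n : block->nodes()) {
    // Constants are resources shared by both kinds of modules; skip them.
    if (n->kind() == torch::jit::prim::Constant) {
      continue;
    }
    ctx->setNodeExecutorDecision(
        n,
        HasConverterFor(n) ? NodeExecutorDecision::kCONVERT
                           : NodeExecutorDecision::kUNSUPPORTED);
  }
}
```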

@narendasan added the release: v1.3 (Tagged to be included in v1.3) label on Aug 14, 2022
std::ostream& operator<<(std::ostream& os, const NodeExecutorDecision& format) {
  switch (format) {
    case NodeExecutorDecision::kUNSUPPORTED:
      return os << "to run torch due to lack of converter support";

@narendasan (Collaborator, Author) commented: "run in torch"

@@ -272,8 +269,8 @@ GraphAndMapping ConstructFallbackGraph(
       // convert the 2 blocks in prim::if and get the converted graph with mappings
       std::vector<GraphAndMapping> graph_and_mappings;
       for (auto cur_block : if_node->blocks()) {
-        graph_and_mappings.push_back(
-            ConstructFallbackGraph(new_mod, cur_block, example_tensor_map, cfg, static_params, fallback_nodes));
+        graph_and_mappings.push_back(ConstructFallbackGraph_(
@narendasan (Collaborator, Author): Not wildly pressing, but is there a way to do all the partitioning beforehand and then go through and compile specific blocks? Having them mixed is not as easy to debug.

@narendasan (Collaborator, Author): @bowang007, thoughts here?

@narendasan (Collaborator, Author): Perhaps what we do is recursively partition, then recursively compile the final graph. Not sure if graph stitching can handle this right now.
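
A rough sketch of that two-phase idea, under the stated caveat that stitching may not support it yet. PartitioningCtx, ExampleIValues, GraphAndMapping, and Stitch appear in this PR's diffs; PartitionAllBlocks, PartitionBlock, CompileAllBlocks, and CompileTRTSegments are hypothetical names used only for illustration:

```cpp
// Phase 1: recursively partition every block (including prim::If sub-blocks),
// recording results in ctx->partitioned_blocks before anything is compiled.
// Assumes the partitioning declarations from this PR are in scope.
void PartitionAllBlocks(PartitioningCtx* ctx, torch::jit::Block* block, ExampleIValues& ivalues) {
  PartitionBlock(ctx, block, ivalues); // hypothetical per-block partitioning entry point
  for (torch::jit::Node* n : block->nodes()) {
    if (n->kind() == torch::jit::prim::If) {
      for (torch::jit::Block* sub : n->blocks()) {
        PartitionAllBlocks(ctx, sub, ivalues);
      }
    }
  }
}

// Phase 2: with all decisions and segments recorded in the context, compile
// the TensorRT segments and stitch the final graph in a separate pass.
GraphAndMapping CompileAllBlocks(PartitioningCtx* ctx, torch::jit::Block* top_level_block) {
  CompileTRTSegments(ctx); // hypothetical compilation step
  return Stitch(ctx, top_level_block); // Stitch(ctx, block) appears in this PR's diffs
}
```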

Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>
@github-actions bot added the component: api [Python] label on Aug 14, 2022
@github-actions bot: Code conforms to C++ style guidelines

@github-actions bot: Code conforms to Python style guidelines

@@ -219,19 +219,16 @@ void AddIfBlockToGraph(
   return;
 }
 
-GraphAndMapping ConstructFallbackGraph(
+GraphAndMapping ConstructFallbackGraph_(
@narendasan (Collaborator, Author): Have a more distinguishing name for this, maybe?

@bowang007 (Collaborator) commented Aug 16, 2022

> (quoting @narendasan's comment above)

Went through it just now. I think the general architecture is very clear and comprehensive; here are some thoughts:
2. We can have a map for every node in the graph; the logic might be a little different, but it shouldn't be too complicated.
3. More details on this?

@narendasan (Collaborator, Author)

> 3. More details on this?

I want to be able to enable features like #1257, which would show the user information like inputs to a block, node assignments, shape calculation results, etc.

@bowang007 (Collaborator)

@narendasan If we store SegmentedBlocks in PartitioningCtx, what about the sub-blocks' SegmentedBlocks? It looks like it could get messed up if there is a node like If::node which contains sub-blocks.

@narendasan (Collaborator, Author)

> (quoting @bowang007's question above)

Where do we put sub-blocks now?
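
One possible answer, sketched under assumptions: the ctx->partitioned_blocks map visible in the diffs below is keyed by torch::jit::Block*, so each sub-block (e.g. the bodies of an If::node) could carry its own PartitionedGraph entry without colliding with its parent's. The helper below is illustrative, not code from this PR:

```cpp
#include <vector>

#include <torch/csrc/jit/ir/ir.h>

// Collect a block and all of its nested sub-blocks (If/Loop bodies). Each
// collected Block* could then get its own ctx->partitioned_blocks[b] entry,
// keeping parent and sub-block SegmentedBlocks separate.
void CollectBlocks(torch::jit::Block* block, std::vector<torch::jit::Block*>& out) {
  out.push_back(block);
  for (torch::jit::Node* n : block->nodes()) {
    for (torch::jit::Block* sub : n->blocks()) {
      CollectBlocks(sub, out);
    }
  }
}
```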

@github-actions bot: Code conforms to C++ style guidelines

@github-actions bot: Code conforms to Python style guidelines

@github-actions bot: Code conforms to C++ style guidelines

@github-actions bot: Code conforms to Python style guidelines

@github-actions bot: There are some changes that do not conform to C++ style guidelines:

diff --git a/home/runner/work/TensorRT/TensorRT/core/compiler.cpp b/tmp/changes.txt
index 178d3c4..8599be8 100644
--- a/home/runner/work/TensorRT/TensorRT/core/compiler.cpp
+++ b/tmp/changes.txt
@@ -142,7 +142,7 @@ partitioning::GraphAndMapping BuildHybridGraph(

  partitioning::Partition(&partitioning_ctx, collection_input_ivalues_map);

-  for (auto &partitioned_block : partitioning_ctx.partitioned_blocks) {
+  for (auto& partitioned_block : partitioning_ctx.partitioned_blocks) {
    partitioning::PartitionedGraph& segmented_blocks = partitioned_block.second;

    for (auto& seg_block : segmented_blocks) {
diff --git a/home/runner/work/TensorRT/TensorRT/core/partitioning/partitioning.cpp b/tmp/changes.txt
index 1a596a2..0d18f37 100644
--- a/home/runner/work/TensorRT/TensorRT/core/partitioning/partitioning.cpp
+++ b/tmp/changes.txt
@@ -73,7 +73,6 @@ void SetExplicitFallbackNodes(PartitioningCtx* ctx, torch::jit::Block* block) {
      // Set the rest nodes to TensorRt
      ctx->setNodeExecutorDecision(n, NodeExecutorDecision::kCONVERT);
    }
-
  }
  return;
}
@@ -236,9 +235,12 @@ void resolveTRTNonTensorInputs(PartitioningCtx* ctx, torch::jit::Block* block) {
        }
      }
      if (!inputs_to_resolve.empty()) {
-        std::vector<torch::jit::Node*> dependency_nodes = getDependencyNodes(inputs_to_resolve, cur_partitioned_block[i]);
+        std::vector<torch::jit::Node*> dependency_nodes =
+            getDependencyNodes(inputs_to_resolve, cur_partitioned_block[i]);
        dependency_nodes.insert(
-            dependency_nodes.end(), cur_partitioned_block[i].raw_nodes().begin(), cur_partitioned_block[i].raw_nodes().end());
+            dependency_nodes.end(),
+            cur_partitioned_block[i].raw_nodes().begin(),
+            cur_partitioned_block[i].raw_nodes().end());
        cur_partitioned_block[i] = SegmentedBlock(SegmentedBlock::kTensorRT, dependency_nodes);
      }
    }
@@ -339,7 +341,6 @@ void SegmentGraph(PartitioningCtx* ctx, torch::jit::Block* block) {

  std::vector<torch::jit::Node*> in_prog_trt_blk_nodes, in_prog_pyt_blk_nodes;
  for (const auto n : nodes) {
-
    // Skip constant nodes as they are resources for both kinds of modules
    if (n->kind() == torch::jit::prim::Constant) {
      continue;
@@ -438,7 +439,6 @@ void Partition(PartitioningCtx* ctx, ExampleIValues& example_tensor_map) {

  // Go through all the blocks to do the partitioning
  for (torch::jit::Block* block : ctx->original_blocks) {
-
    // Find all the fallback nodes and build execution decision LUT for all nodes
    SetNodeExecutorLUT(ctx, block);

@@ -453,22 +453,17 @@ void Partition(PartitioningCtx* ctx, ExampleIValues& example_tensor_map) {
    LOG_DEBUG("Registering input/output torch::jit::Value for segmented graphs");
    registerSegmentsOutputs(ctx, block);

-    for (auto &i : ctx->partitioned_blocks[block]) {
+    for (auto& i : ctx->partitioned_blocks[block]) {
      LOG_DEBUG(i);
    }

    // run shape analysis on each segmented block
    runShapeAnalysis(ctx, block, example_tensor_map);
-
  }

-
-
-//  for (uint64_t i = 0; i < ctx->blocks.size(); i++) {
-//    ctx->blocks[i].update_id(i);
-//  }
-
-
+  //  for (uint64_t i = 0; i < ctx->blocks.size(); i++) {
+  //    ctx->blocks[i].update_id(i);
+  //  }
}

} // namespace partitioning
diff --git a/home/runner/work/TensorRT/TensorRT/core/partitioning/stitching.cpp b/tmp/changes.txt
index f8a9633..42cfe9d 100644
--- a/home/runner/work/TensorRT/TensorRT/core/partitioning/stitching.cpp
+++ b/tmp/changes.txt
@@ -97,7 +97,6 @@ void AddIfBlockToGraph(
  return;
}

-
GraphAndMapping Stitch(PartitioningCtx* ctx, torch::jit::Block* block) {
  auto new_g = std::make_shared<torch::jit::Graph>();

@@ -146,8 +145,7 @@ GraphAndMapping Stitch(PartitioningCtx* ctx, torch::jit::Block* block) {
    }
  }
  return {new_g, old_to_new_g};
-
-}
-}
-}
}
+} // namespace partitioning
+} // namespace core
+} // namespace torch_tensorrt
diff --git a/home/runner/work/TensorRT/TensorRT/core/partitioning/partitioningctx/PartitioningCtx.cpp b/tmp/changes.txt
index 4b8368d..3c1db64 100644
--- a/home/runner/work/TensorRT/TensorRT/core/partitioning/partitioningctx/PartitioningCtx.cpp
+++ b/tmp/changes.txt
@@ -38,7 +38,7 @@ void PartitioningCtx::setNodeExecutorDecision(torch::jit::Node* n, NodeExecutorD
  // NOTE: This is this way due to partitioning.cpp L#134 I dont know if this is what we should do.

  auto result = node_executor_decision_map[n] = decision;
-  return ;
+  return;
}

bool PartitioningCtx::shouldNodeRunInTorch(torch::jit::Node* n) {
ERROR: Some files do not conform to style guidelines

@github-actions bot: Code conforms to Python style guidelines

@github-actions bot: There are some changes that do not conform to C++ style guidelines:

diff --git a/home/runner/work/TensorRT/TensorRT/core/partitioning/partitioning.cpp b/tmp/changes.txt
index 86cfd6d..53b9f22 100644
--- a/home/runner/work/TensorRT/TensorRT/core/partitioning/partitioning.cpp
+++ b/tmp/changes.txt
@@ -454,7 +454,6 @@ void Partition(PartitioningCtx* ctx, ExampleIValues& example_tensor_map) {
    LOG_DEBUG("Registering input/output torch::jit::Value for segmented graphs");
    RegisterSegmentsOutputs(ctx, block);

-
    // run shape analysis on each segmented block
    RunShapeAnalysis(ctx, block, example_tensor_map);
  }
ERROR: Some files do not conform to style guidelines

@github-actions bot: Code conforms to Python style guidelines

@github-actions bot: Code conforms to C++ style guidelines

@github-actions bot: Code conforms to Python style guidelines

Signed-off-by: Dheeraj Peri <[email protected]>
@github-actions bot: Code conforms to C++ style guidelines

@github-actions bot: Code conforms to Python style guidelines

core/partitioning/BUILD (review thread resolved)
core/partitioning/partitioning.cpp (review thread outdated, resolved)
@peri044 removed the WIP (Work is in progress, pull request should not be merged yet) label on Sep 22, 2022
@github-actions bot: Code conforms to Python style guidelines

@github-actions bot: Code conforms to C++ style guidelines

@peri044 marked this pull request as ready for review on September 22, 2022 17:51
@peri044 merged commit 24172f0 into master on Sep 22, 2022
@github-actions bot: Code conforms to Python style guidelines

@github-actions bot: Code conforms to C++ style guidelines

@bowang007 deleted the partitioning_ctx branch on December 7, 2022 04:14
Labels: cla signed, component: api [C++], component: api [Python], component: core, component: lowering, component: partitioning, component: tests, release: v1.3