
Make inlining even more modular #2004

Merged
zasdfgbnm merged 7 commits into devel from inlineing-refactor on Sep 28, 2022

Conversation

@zasdfgbnm (Collaborator) commented Sep 28, 2022

I don't like TensorView::setComputeAt and TensorView::setMaxProducer: they are private, and I cannot use them conveniently. It would be better if there were a public method on TensorView that directly sets the CA position of a TV with the necessary validation. So I added two public methods, TensorView::inlineAt and TensorView::updateMaxProducerPosition, and removed TensorView::setComputeAt and TensorView::setMaxProducer.

inlineAt is safe to use publicly: it will not inline into disallowed dimensions, and the max producer position is kept consistent. There are two ways of using inlineAt:

If you only want to set the CA position of a single tensor, then simply do

```cpp
tv->inlineAt(pos, /*best_effort=*/true);
```

If you want to set the CA position of multiple tensors, then you can do

```cpp
MaxPosCalculator calc;
for (auto tv : tensors) {
  tv->inlineAt(pos, /*best_effort=*/true, &calc);
}
```

In both cases, the max producer position will be updated at the end of the inlineAt call. Manually constructing the MaxPosCalculator object is mainly for performance: we don't want to rebuild the unmappable dimensions every time we call inlineAt. If we want to inline multiple tensors, we should build it once at the beginning and use it in all inlineAt calls.

Even though inlineAt always updates the max producer position automatically, there are still cases where we want to manually trigger an update of the max producer position; updateMaxProducerPosition is designed for that purpose. It is mainly used for grouped reductions.
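For illustration, the grouped-reduction path in this PR ends up using it roughly like this (paraphrased from the diff quoted further down in this conversation):

```cpp
// After grouping several reduction exprs into one GroupedReductionOp, the
// producer relationships of the outputs change, so their max producer
// positions are refreshed explicitly.
IrBuilder::create<GroupedReductionOp>(
    container, op_types, init_vals, outputs, inputs);

for (auto output : ir_utils::filterByType<TensorView>(outputs)) {
  output->updateMaxProducerPosition();
}
```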

With inlineAt, I can refactor inlining to make it even more modular:

There is no longer an InlinePropagator. Innermost inlining is now just a dumb for loop:

```cpp
MaxPosCalculator calc;
for (auto tv : all_tvs) {
  tv->inlineAt(-1, /*best_effort=*/true, &calc);
}
```

For standard and best-effort inlining, we first need to do a propagation to find, in each tensor, the position mapped to the given position in the given reference tensor. With the positions calculated, inlining is again a dumb for loop.
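A rough sketch of that two-step structure (the findMappedPositions helper below is a hypothetical stand-in for running the FindMappedPositions propagator over a MaxInfoSpanningTree; it is not the verbatim implementation):

```cpp
// Sketch: first compute, for every tensor, the position mapped to
// reference_pos in reference_tv; then inline each tensor at that position,
// reusing one MaxPosCalculator for all calls.
void inlineAllAtSketch(
    TensorView* reference_tv,
    int64_t reference_pos,
    bool best_effort) {
  std::unordered_map<TensorView*, int64_t> positions =
      findMappedPositions(reference_tv, reference_pos); // hypothetical helper

  MaxPosCalculator calc;
  for (auto& [tv, pos] : positions) {
    tv->inlineAt(pos, best_effort, &calc);
  }
}
```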

```diff
@@ -11522,7 +11522,7 @@ TEST_F(NVFuserTest, FusionNonUniqueBroadcastSize_CUDA) {
   fusion.addInput(tv1);
   fusion.addInput(tv2);

-  auto tv3 = broadcast(tv0, {false, true});
+  auto tv3 = broadcast(tv0, {true, false});
```
zasdfgbnm (Collaborator Author):

When doing {false, true}, inlineAt will try to set the CA position to 1 instead of 2 due to hoisting, which will not be a problem.

```cpp
namespace fuser {
namespace cuda {

class MaxPosCalculator {
```
zasdfgbnm (Collaborator Author):

The code in this class is copy-pasted from inline_propagator.cpp with only a few trivial changes.

```cpp
// Try to find the aligned position on consumer's domain corresponding to the
// compute at position of producer domain. No checking on actual
// producer-consumer relationship.
unsigned int getConsumerPosAlignedToProducerCA(
```
zasdfgbnm (Collaborator Author):

copy-pasted from inline_propagator.cpp with only a few trivial changes.


```cpp
// Find the positions of `selected` tensors that are mapped to the given
// position in the reference tensor.
class FindMappedPositions : public MaxInfoSpanningTree::Propagator {
```
zasdfgbnm (Collaborator Author):

Modified from InlinePropagator
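A condensed sketch of what such a propagator could look like. The propagateC2P/propagateP2C/propagateSibling hooks follow the MaxInfoSpanningTree::Propagator interface, and mapPositionFromTo is a hypothetical helper (the real code presumably relies on TransformReplay-style position matching):

```cpp
// Sketch: walk the spanning tree from the reference tensor and record, for
// every visited tensor, the position mapped to the reference position.
class FindMappedPositionsSketch : public MaxInfoSpanningTree::Propagator {
 public:
  std::unordered_map<TensorView*, int64_t> output;

  void propagateC2P(TensorView* from, TensorView* to) override {
    output[to] = mapPositionFromTo(from, to, output.at(from));
  }
  void propagateP2C(TensorView* from, TensorView* to) override {
    output[to] = mapPositionFromTo(from, to, output.at(from));
  }
  void propagateSibling(TensorView* from, TensorView* to) override {
    // Siblings share the same domain, so the position carries over directly.
    output[to] = output.at(from);
  }
};
```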


@naoyam naoyam closed this Sep 28, 2022
@naoyam naoyam reopened this Sep 28, 2022
@naoyam (Collaborator) commented Sep 28, 2022

I'm sorry, I just accidentally closed this PR.

@naoyam (Collaborator) commented Sep 28, 2022

> If you want to set the CA position of multiple tensors, then you can do
>
> ```cpp
> InlineBatchingGuard guard;
> for (auto tv : tensors) {
>   tv->inlineAt(pos, /*best_effort=*/true);
> }
> ```

So, does this mean that the producer positions of the tensors could be in an inconsistent state until guard goes out of scope?

@zasdfgbnm (Collaborator Author):

> > If you want to set the CA position of multiple tensors, then you can do
> >
> > ```cpp
> > InlineBatchingGuard guard;
> > for (auto tv : tensors) {
> >   tv->inlineAt(pos, /*best_effort=*/true);
> > }
> > ```
>
> So, does this mean that the producer positions of the tensors could be in an inconsistent state until guard goes out of scope?

Yes. Hmmm... looks like I don't have to make it like this. I will update it. I did it this way in the inline propagator as a tiny optimization to save some computation. For example:

```
tv1  tv2
  \  /
  tv3
```

I wanted to update tv3's max producer position at the end of the propagation, instead of every time we update the inline position of tv1/tv2. But it seems that the computation of the max producer position isn't expensive at all (it should be just a fraction of getMaxPosAll), so there is no longer any value in such a tiny optimization.

@zasdfgbnm (Collaborator Author):

> Hmmm... looks like I don't have to make it like this. I will update it.

Updated.

@naoyam (Collaborator) commented Sep 28, 2022

What does InlineBatchingGuard do then?

@zasdfgbnm (Collaborator Author) commented Sep 28, 2022

> What does InlineBatchingGuard do then?

I just added a comment to the code:

```cpp
// This class is mainly for getting better performance when we want to inline
// multiple tensors in the same fusion: we don't want to build the unmappable
// dimensions every time we call inlineAt. If we want to inline multiple
// tensors, we should build at the beginning and use it in all inlineAt calls.
```

@naoyam (Collaborator) commented Sep 28, 2022

> > What does InlineBatchingGuard do then?
>
> I just added a comment to the code:
>
> ```cpp
> // This class is mainly for getting better performance when we want to inline
> // multiple tensors in the same fusion: we don't want to build the unmappable
> // dimensions every time we call inlineAt. If we want to inline multiple
> // tensors, we should build at the beginning and use it in all inlineAt calls.
> ```

OK, I see. So, it's for keeping non-inlinable IDs around.

I'm concerned about the implicit behavior of the guard. For example, when tv->inlineAt(pos) is done, its exact behavior depends on what the current guard has as non-inlinable, which means the exact behavior could depend on the runtime call path to the call site of inlineAt, and just looking at the code isn't enough.

FusionGuard has the same behavior, but since we generally only deal with one Fusion instance at a time, this isn't a big deal for FusionGuard.

How about explicitly passing around the information held by InlineBatchingGuard rather than keeping it at the global scope? I agree that the latter is more convenient, but I think the former is more robust, and I'm not sure the convenience is more important than the robustness in this case.
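To make the contrast concrete, a minimal sketch of the two styles being discussed (the explicit form is the direction the PR ends up taking; both snippets are illustrative only):

```cpp
// Implicit: what inlineAt does depends on whichever guard happens to be
// active on the current call path.
InlineBatchingGuard guard;
tv->inlineAt(pos, /*best_effort=*/true);

// Explicit: the reused state is visible at the call site.
MaxPosCalculator calc;
tv->inlineAt(pos, /*best_effort=*/true, &calc);
```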

Comment on lines 94 to 115

```cpp
TORCH_CUDA_CU_API void inlineMost(
    Fusion* fusion,
    const std::unordered_set<IterDomain*>& uninlinable_ids = {});
TORCH_CUDA_CU_API void inlineMost(
    const std::vector<TensorView*>& tvs,
    const std::unordered_set<IterDomain*>& uninlinable_ids = {});
TORCH_CUDA_CU_API void inlineMost(
    const std::unordered_set<TensorView*>& tvs,
    const std::unordered_set<IterDomain*>& uninlinable_ids = {});

TORCH_CUDA_CU_API void inlineAllAt(
    TensorView* reference_tv,
    int64_t reference_pos,
    bool best_effort = false,
    const std::unordered_set<IterDomain*>& uninlinable_ids = {});

TORCH_CUDA_CU_API void inlineSelectedAt(
    const std::unordered_set<TensorView*>& selected,
    TensorView* reference_tv,
    int64_t reference_pos,
    bool best_effort = false,
    const std::unordered_set<IterDomain*>& uninlinable_ids = {});
```
naoyam (Collaborator):

Can you please add comments? Just brief ones would be fine.

zasdfgbnm (Collaborator Author):

Added
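For reference, the requested brief comments might read roughly like this (illustrative wording, not a verbatim quote of what was committed):

```cpp
// Inline every tensor of the fusion (or of tvs) at the innermost position
// allowed by the usual inlining constraints.
TORCH_CUDA_CU_API void inlineMost(
    Fusion* fusion,
    const std::unordered_set<IterDomain*>& uninlinable_ids = {});

// Inline all tensors at the position that maps to reference_pos in
// reference_tv; with best_effort, fall back to the closest allowed position.
TORCH_CUDA_CU_API void inlineAllAt(
    TensorView* reference_tv,
    int64_t reference_pos,
    bool best_effort = false,
    const std::unordered_set<IterDomain*>& uninlinable_ids = {});

// Same as inlineAllAt, but only for the tensors in `selected`.
TORCH_CUDA_CU_API void inlineSelectedAt(
    const std::unordered_set<TensorView*>& selected,
    TensorView* reference_tv,
    int64_t reference_pos,
    bool best_effort = false,
    const std::unordered_set<IterDomain*>& uninlinable_ids = {});
```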


```cpp
std::unique_ptr<InlineBatchingGuard> maybe_guard;
if (InlineBatchingGuard::getCurrentGuard() == nullptr) {
  maybe_guard = std::make_unique<InlineBatchingGuard>(fusion());
```
naoyam (Collaborator):

Is maybe_guard ever used?

zasdfgbnm (Collaborator Author):

It is now a std::unique_ptr<MaxPosCalculator>, which creates a MaxPosCalculator object if one is not provided.
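That fallback pattern (creating the calculator only when the caller did not supply one) might look roughly like this inside inlineAt, assuming a MaxPosCalculator* calc parameter:

```cpp
// Sketch: own a MaxPosCalculator only when none was passed in, so batched
// callers can reuse a single instance across many inlineAt calls.
std::unique_ptr<MaxPosCalculator> calc_owner;
if (calc == nullptr) {
  calc_owner = std::make_unique<MaxPosCalculator>();
  calc = calc_owner.get();
}
```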

```cpp
}

TORCH_INTERNAL_ASSERT(
    fusion() == InlineBatchingGuard::getCurrentGuard()->getFusion(),
```
naoyam (Collaborator):

This would segfault when getCurrentGuard() is nullptr.

zasdfgbnm (Collaborator Author):

This is removed now. The entire InlineBatchingGuard is removed.

@zasdfgbnm (Collaborator Author) commented Sep 28, 2022

> How about explicitly passing around the information held by InlineBatchingGuard rather than keeping it at the global scope?

I think doing so makes sense. Yes, it will be inconvenient, but I am also planning to do the following things in later PRs, which will remove the inconvenience:

  1. The detection of unmappable dims could be made local. For example, when we do tv->inlineAt(pos), we only need to look at the paths from tv to all its consumers; there is no need to traverse the entire DAG. If I implement this logic, then inlineAt will become completely local, and the cost of computing unmappable dims at each inlineAt call will be acceptable.
  2. We should be safe to inline into IDs mapped to trivial reductions, so the uninlinable_ids parameter of MaxPosCalculator could be removed. With this change, MaxPosCalculator will become dataless, so nothing needs to be passed around.

So for now, let's just take the robust approach and leave the improvement of convenience to a future PR.


```cpp
IrBuilder::create<GroupedReductionOp>(
    container, op_types, init_vals, outputs, inputs);

for (auto output : ir_utils::filterByType<TensorView>(outputs)) {
  output->updateMaxProducerPosition();
```
naoyam (Collaborator):

Looks cleaner than before. Thanks for the refactoring.

@naoyam (Collaborator) commented Sep 28, 2022

I'm done reviewing the changes. Overall, the new design looks much cleaner. Thanks for the refactoring. No comments other than those posted above.

@zasdfgbnm (Collaborator Author):

@naoyam I have resolved all review comments. Thanks for reviewing! I feel that the code is much cleaner after making the changes according to the review.

@naoyam (Collaborator) left a comment

LGTM. Thanks for addressing the comments!

@zasdfgbnm zasdfgbnm merged commit a43cb20 into devel Sep 28, 2022
@zasdfgbnm zasdfgbnm deleted the inlineing-refactor branch September 28, 2022 22:30
jjsjann123 added a commit that referenced this pull request Nov 9, 2022
Syncing nvfuser devel branch to upstream master. https://github.com/csarofeen/pytorch/

Codegen changes include:

* codegen improvement:
    i. allow non-root trivial reductions, allow empty/no-op fusion
    ii. fixes vectorization checks and size calculation
    iii. bank conflict handle improvement
    iv. enables transpose scheduler

* misc:
    i. CI tests failure fixes
    ii. cpp tests file clean up
    iii. trivial forwarding supports added in codegen runtime
    iv. added factory methods support in codegen

Commits that are in this PR from the devel branch:

```
7117a7e patching nvfuser conv cudnn test numerics mismatch (#2048)
65af1a4 Inserting sync for redundant parallel types is already done at the (#2023)
6ac74d1 Fix sync map (#2047)
f5bca33 Bank conflict checker improvements (#2032)
d2ca7e3 Minor update on cp.async code generation. (#1901)
d36cf61 Test file cleanup (#2040)
0b8e83f Allow non-root trivial reductions (#2037)
a2dfe40 Fix vectorize size calculation (#2035)
e040676 Use withPredicate to replace setPredicate to maintain Exprs immutable (#2025)
197221b removing ci workflow (#2034)
40e2703 Reduction rand like patch (#2031)
bc77266 Add utility for checking bank conflict of shared memory (#2029)
ddd1cf7 Add back FusionReductionWithTrivialReduction_CUDA (#2030)
fbd97e5 Revert "Cleanup trivial reduction workarounds (#2006)" (#2024)
bca20c1 Cleanup trivial reduction workarounds (#2006)
e4b6585 Trivial forwarding (#1995)
1a0e355 Fix contiguity analysis of predicates to match updated contiguity. (#1991)
a4effa6 Enable output allocation cache (#2010)
35440b7 Patching bn inference (#2016)
0f9f0b4 Add matmul benchmark (#2007)
45045cd Enable tests previously disabled due to an aliasing bug (#2005)
967aa77 Contiguous indexing for View operations (#1990)
a43cb20 Make inlining even more modular (#2004)
dc45835 Test util cleanup (#2003)
3ca21eb More strict validation (#2000)
a7a7d57 Fix build problem (#1999)
fc235b0 Just fixes comments (#1998)
482386c cleanup (#1997)
4cbe0db Improve divisible split detection (#1970)
42ccc52 Minor build fix. (#1996)
fcf8c09 Cleanup of lower_utils.cpp: Isolate out GpuLower usage (#1989)
15f2f6d Move ConcretizedBroadcastDomains to shared_ptr in GpuLower. (#1988)
8f1c7f5 Minor cleanup lower_unroll.cpp (#1994)
1d9858c Minor cleanup (#1992)
f262d9c Add support for uniform RNG (#1986)
eb1dad1 Remove non-const functions, remove GpuLower instance on build, pass in ca_map. (#1987)
634820c Add support for some empty fusion (#1981)
eabe8d8 Segment self mapping fusions (#1954)
e96aacf Enable Transpose operation (#1882)
425dce2 Add a null scheduler that helps segmenting away no-op schedules (#1835)
306d4a6 Fix canScheduleCompileTime check of transpose scheduler (#1969)
b1bd32c Minor fix (#1967)
bd93578 Enable transpose scheduler (#1927)
b7a206e Move scheduler vectorize utilities into their own file (#1959)
d9420e4 View scheduling (#1928)
c668e13 Upstream push ci fixes (#1965)
c40202b Fix dump effective bandwidth (#1962)
93505bc WAR on index mapping when exact and permissive maps differ (#1960)
45e95fd Allow splitting inner-most ID to create virtual innermost ID in transpose scheduler (#1930)
a3ecb33 Improve the comments at the beginning of index_compute.h (#1946)
f7bc341 Remove unused variables (#1955)
df3393a Some cleanup (#1957)
7d1d7c8 TVDomainGuard factory (#1953)
357ba22 Fill allocation with nan on tests (#1956)
8eafc54 Fix detection of unmappable root domains (#1952)
90a51f2 Some indexing cleanups, Add eye support (#1940)
ddc01e4 Exclude unsupported data types (#1951)
992e17c test the groups the same order as they are merged (#1949)
208262b Move detection of self mapping IDs to IterDomainGraph from (#1941)
ac4de38 Merge pull request #1945 from csarofeen/master_merge_0828
6310948 Add full, full_like, zeros, zeros_like, ones, ones_like (#1943)
aab10bc Merge remote-tracking branch 'upstream/viable/strict' into HEAD
4c254c0 Fix arange when step is negative (#1942)
89330aa Tensor factories must set the output shape as its input (#1939)
```

RUN_TORCHBENCH: nvfuser

Differential Revision: [D40869846](https://our.internmc.facebook.com/intern/diff/D40869846)
Pull Request resolved: pytorch#87779
Approved by: https://github.com/davidberard98