This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

[MXNET-1450] Improve the backward mirroring implementation #18228

Merged
merged 1 commit into apache:master from bojian/Echo-Contrib on May 21, 2020

Conversation

ArmageddonKnight
Contributor

@ArmageddonKnight ArmageddonKnight commented May 4, 2020

Description

This PR improves the backward mirroring implementation. Specifically, it considers, for each (group of) operator node(s), whether backward mirroring is truly beneficial to the total memory footprint (please refer to test cases #1 and #2 below). It also considers the data dependencies between a forward node and its corresponding gradient node, because a layer's feature maps can sometimes be recomputed without recomputing the layer itself (e.g., the Fully-Connected layer, test case #3). These improvements further reduce the memory consumption of DNN model training.
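For readers unfamiliar with the idea, backward mirroring (also known as recomputation or gradient checkpointing) trades compute for memory: instead of keeping a layer's feature map alive from the forward pass to the backward pass, the mirrored path recomputes it during backward from the layer's input. The following is a minimal NumPy sketch with hypothetical names, not MXNet's actual implementation:

```python
import numpy as np

# Minimal sketch of backward mirroring for h = relu(x @ W)
# (hypothetical names; not MXNet's implementation).

def forward(x, W, keep_feature_map):
    h = np.maximum(x @ W, 0.0)                 # feature map
    saved_h = h if keep_feature_map else None  # mirroring: drop h here
    return h, saved_h

def backward(dh, x, W, saved_h):
    if saved_h is None:                        # mirror path: recompute h
        saved_h = np.maximum(x @ W, 0.0)
    dpre = dh * (saved_h > 0)                  # ReLU gradient mask
    dW = x.T @ dpre                            # weight gradient
    dx = dpre @ W.T                            # input gradient
    return dW, dx
```

Both variants yield identical gradients; the mirrored one frees the memory of `h` between the passes at the cost of one extra forward computation during backward.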

Checklist

Essentials

Please feel free to remove inapplicable items for your PR.

  • The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to the relevant JIRA issue created (except PRs with tiny changes)
  • Changes are complete (i.e. I finished coding on this PR)
  • All changes have test coverage:
  • Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
  • Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore)
  • Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL)
  • Code is well-documented:
  • For user-facing API changes, API doc string has been updated.
  • For new C++ functions in header files, their functionalities and arguments are documented.
  • For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
  • Check the API doc at https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
  • To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change

Changes

  • Backward Mirroring Improvements
  • Test Case #1: RNN Cell
    • In the following graphs, red arrows denote backward dependencies (i.e., on feature maps), which are usually the main contributors to memory consumption.
    • In the example below, which mimics an RNN cell, we should NOT do backward mirroring: doing so would double the total feature-map storage.
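To make the doubling argument concrete, here is a toy live-memory accounting for this case (illustrative numbers only, not MXNet's actual cost model). Mirroring the cell cannot release its input feature map, because the recomputation itself needs that input; meanwhile the recomputed output coexists with the feature maps already kept for the downstream layers that consumed the original output.

```python
# Toy accounting for the RNN-cell case (illustrative only, not MXNet's
# cost model).  A cell reads feature map x_in and writes x_out, each of
# size S; the cell's gradient node needs both.
S = 1

# Without mirroring: x_in and x_out stay live from forward to backward.
peak_no_mirror = 2 * S  # {x_in, x_out}

# With mirroring: x_out is dropped after forward, but recomputing it in
# backward still needs x_in, so x_in stays live anyway; the recomputed
# x_out then coexists with the copy kept for the downstream consumers.
peak_mirror = 3 * S     # {x_in, recomputed x_out, downstream x_out}

assert peak_mirror > peak_no_mirror  # mirroring hurts here
```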

  • Test Case #3: Fully-Connected Layer
    • In the example below, a red node denotes a compute-heavy layer whose gradients do not depend on its output data entries (e.g., the Fully-Connected layer). Such a node can also be placed on the mirror path, which relieves the backward dependency on its feature maps (i.e., its inputs) without incurring significant performance overhead.
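The claim about the Fully-Connected layer can be checked directly: for y = x @ W, the backward pass needs only the input x, the weights W, and the upstream gradient dy, never the output y itself, so y can be discarded without blocking the gradient computation. A NumPy sketch (not MXNet code):

```python
import numpy as np

# Sketch (not MXNet code): the backward of a Fully-Connected layer
# y = x @ W uses only x, W, and the upstream gradient dy -- never y.
rng = np.random.default_rng(0)
x  = rng.standard_normal((4, 3))   # layer input (feature map)
W  = rng.standard_normal((3, 5))   # weights
dy = rng.standard_normal((4, 5))   # upstream gradient

y  = x @ W        # output: computed in forward, never read below
dW = x.T @ dy     # weight gradient: depends on x and dy only
dx = dy @ W.T     # input gradient:  depends on W and dy only
```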

Comments

FYI, @eric-haibin-lin @szha

@mxnet-bot

Hey @ArmageddonKnight , Thanks for submitting the PR
All tests are already queued to run once. If tests fail, you can trigger one or more tests again with the following commands:

  • To trigger all jobs: @mxnet-bot run ci [all]
  • To trigger specific jobs: @mxnet-bot run ci [job1, job2]

CI supported jobs: [website, edge, unix-cpu, miscellaneous, unix-gpu, windows-gpu, sanity, windows-cpu, clang, centos-cpu, centos-gpu]


Note:
Only the following 3 categories can trigger CI: PR Author, MXNet Committer, Jenkins Admin.
All CI tests must pass before the PR can be merged.

@ArmageddonKnight ArmageddonKnight force-pushed the bojian/Echo-Contrib branch 2 times, most recently from 2509aaf to ade545d Compare May 4, 2020 07:32
@leezu leezu requested a review from eric-haibin-lin May 4, 2020 18:35
@eric-haibin-lin eric-haibin-lin self-assigned this May 4, 2020
@apeforest apeforest self-assigned this May 5, 2020
docs/static_site/src/pages/api/faq/env_var.md Outdated
docs/static_site/src/pages/api/faq/env_var.md Outdated
src/executor/graph_executor.cc Outdated
@ArmageddonKnight
Contributor Author

@mxnet-bot run ci [centos-gpu]

@mxnet-bot

Jenkins CI successfully triggered : [centos-gpu]

Contributor

@apeforest apeforest left a comment


LGTM. Thanks

Member

@eric-haibin-lin eric-haibin-lin left a comment


@ArmageddonKnight would you mind sharing some performance result with this feature enabled?

@ArmageddonKnight
Contributor Author

ArmageddonKnight commented May 19, 2020

@ArmageddonKnight would you mind sharing some performance result with this feature enabled?

@eric-haibin-lin According to our evaluation on a single machine with RTX 2080 Ti, the performance overhead of training ResNet-152 with a batch size of 152 is 6%.

@eric-haibin-lin eric-haibin-lin merged commit 4827de8 into apache:master May 21, 2020
@sxjscience
Member

Is there a way to use it for Gluon?

@ArmageddonKnight ArmageddonKnight deleted the bojian/Echo-Contrib branch May 26, 2020 04:33
@ArmageddonKnight
Contributor Author

Hi @sxjscience , sorry for the late reply. It is possible in principle; however, the current Gluon backend does not involve mirroring, as can be seen here:

https://github.com/apache/incubator-mxnet/blob/3efacd27f75e38e06151675407b0f17e3c1891a5/src/imperative/cached_op.h#L168-L171

Enabling backward mirroring currently has no effect on Gluon.

AntiZpvoh pushed a commit to AntiZpvoh/incubator-mxnet that referenced this pull request Jul 6, 2020