
[Wait for #2567] [ Test ] Mixed Precision Test Case #2568

Closed
wants to merge 5 commits from the mixed_test branch

Conversation

jijoongmoon (Collaborator)

In this PR

This PR includes the mixed precision test case.

. Input - FC - MSE: "batch_size=2", "model_tensor_type=FP16-FP16", "loss_scale=128" (a hypothetical sketch of this setup follows below)

Self evaluation:

  1. Build test: [X]Passed [ ]Failed [ ]Skipped
  2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon [email protected]

We will add a Var32 tensor if the variable weight is not full
precision (FP32). This enables the weight update with full precision,
and only the apply-gradient process uses this tensor. Therefore, the
lifespan of this tensor should be "ApplyGradient" (a standalone sketch
of this idea follows after this commit message).

. Modify TensorPool to generate the Weight considering mixed precision.

**Self evaluation:**
1. Build test:	 [X]Passed [ ]Failed [ ]Skipped
2. Run test:	 [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <[email protected]>
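
As a rough, standalone illustration of the Var32 idea above (not nntrainer code; _Float16 is a GCC/Clang extension used only to keep the example compact), the gradient step is done against an FP32 master copy and the result is written back into the FP16 variable:

```cpp
#include <cstddef>
#include <vector>

// weight_fp16 : the FP16 variable the layers actually use
// weight_fp32 : the full precision master copy (the role of the Var32 tensor)
// The FP32 copy is only needed here, i.e. while the gradient is applied.
void applyGradientWithMaster(std::vector<_Float16> &weight_fp16,
                             const std::vector<_Float16> &grad_fp16,
                             std::vector<float> &weight_fp32,
                             float learning_rate) {
  for (std::size_t i = 0; i < weight_fp16.size(); ++i) {
    // update in full precision so small steps are not rounded away
    weight_fp32[i] -= learning_rate * static_cast<float>(grad_fp16[i]);
    // write the result back into the FP16 variable
    weight_fp16[i] = static_cast<_Float16>(weight_fp32[i]);
  }
}
```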
This PR creates the variable FP32 tensor when we create the Weight and
Optimizer Weight.

. Update the manager to create the Weight with a var32 tensor requested
from the weight pool.
. Update the weight requests with the Weight Spec and the var, grad, and
var32 tensors which were already created.
. Add cloning a Tensor with a specific type in tensor.h (a hypothetical
illustration follows below).

Resolves:

**Self evaluation:**
1. Build test:	 [X]Passed [ ]Failed [ ]Skipped
2. Run test:	 [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <[email protected]>
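
The following is a hypothetical, self-contained illustration of what "clone with a specific type" amounts to: an element-wise upcast copy of an FP16 buffer into a new FP32 buffer. The helper name and signature are invented for illustration and are not the tensor.h API:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical helper, not the nntrainer Tensor API: produce an FP32 copy
// of an FP16 buffer, i.e. a "clone with a specific type".
// _Float16 is a GCC/Clang extension.
std::vector<float> cloneAsFp32(const std::vector<_Float16> &src) {
  std::vector<float> dst(src.size());
  for (std::size_t i = 0; i < src.size(); ++i)
    dst[i] = static_cast<float>(src[i]); // element-wise upcast FP16 -> FP32
  return dst;
}
```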
This PR enables FP16 support for the layers below (a generic FP16 MSE sketch follows after this commit message):

. input layer
. mse loss layer

Resolves:

**Self evaluation:**
1. Build test:	 [X]Passed [ ]Failed [ ]Skipped
2. Run test:	 [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <[email protected]>
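
As a generic sketch of what FP16 support in an MSE loss involves (not the nntrainer layer implementation; _Float16 is a GCC/Clang extension), the squared errors of FP16 predictions and labels can be accumulated in FP32 so the sum does not lose precision:

```cpp
#include <cstddef>
#include <vector>

// Generic sketch: FP16 inputs, FP32 accumulation of the squared error.
// pred and label are assumed non-empty and of equal size.
float mseFp16(const std::vector<_Float16> &pred,
              const std::vector<_Float16> &label) {
  float acc = 0.0f; // accumulate in FP32 so many small terms are not lost
  for (std::size_t i = 0; i < pred.size(); ++i) {
    float d = static_cast<float>(pred[i]) - static_cast<float>(label[i]);
    acc += d * d;
  }
  return acc / static_cast<float>(pred.size());
}
```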
@taos-ci (Collaborator)

taos-ci commented May 7, 2024

📝 TAOS-CI Version: 1.5.20200925. Thank you for submitting PR #2568. Please follow the 1 commit/1 PR (one commit per PR) policy to get comments quickly from reviewers. Your PR must pass all verification processes of cibot before the review by reviewers can start. If you are a new member joining this project, please read the manuals in the documentation folder and the wiki page. To monitor the progress of your PR in more detail, visit http://ci.nnstreamer.ai/.

@taos-ci (Collaborator)

taos-ci commented May 7, 2024

:octocat: cibot: @jijoongmoon, test/unittest/models/unittest_models_mixed_precision.cpp does not include Doxygen tags such as @file @brief @author @bug. You must include the Doxygen tags in the source code. Please refer to a Doxygen manual at http://github.com/nnstreamer/TAOS-CI/blob/main/ci/doc/doxygen-documentation.md

@taos-ci (Collaborator)

taos-ci commented May 7, 2024

:octocat: cibot: @jijoongmoon, a builder check could not be completed because one of the checkers did not finish. To find out the reason, please go to http://ci.nnstreamer.ai/nntrainer/ci/repo-workers/pr-checker/2568-202405071341380.1280529499054-6ac8d7c9339f11810216253df4a2cbb606606f7e/.

@jijoongmoon jijoongmoon force-pushed the mixed_test branch 2 times, most recently from 49f4996 to b51af11 on May 7, 2024 06:46
is_inplace = true;

/**
* @note Input Layer assuems that the FP32 IN Tensor always. Therefore, if the
Member


typo assume ?

@taos-ci (Collaborator) left a comment


@jijoongmoon, 💯 All CI checkers are successfully verified. Thanks.

This PR includes the mixed precision test case.

. Input - FC - MSE: "batch_size=2", "model_tensor_type=FP16-FP16", "loss_scale=128"

**Self evaluation:**
1. Build test:	 [X]Passed [ ]Failed [ ]Skipped
2. Run test:	 [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <[email protected]>
@taos-ci (Collaborator) left a comment


@jijoongmoon, 💯 All CI checkers are successfully verified. Thanks.

This commit modifies apply gradient in the optimizer.
We do not need to save the optimizer variables in the weight type. Only
the optimizer needs the optimizer variables, and we should update the
weight with full precision to maintain accuracy (a small numeric
demonstration follows after this commit message). Therefore, the
var32 tensors for the optimizer variables are removed.

Resolves:

**Self evaluation:**
1. Build test:	 [X]Passed [ ]Failed [ ]Skipped
2. Run test:	 [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <[email protected]>
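
The accuracy argument can be seen with a tiny numeric example: just below 1.0, FP16 values are roughly 5e-4 apart, so a typical small update such as 1e-4 is rounded away unless the arithmetic is kept in FP32. A minimal standalone demonstration (not nntrainer code; _Float16 is a GCC/Clang extension):

```cpp
#include <cstdio>

int main() {
  _Float16 w16 = 1.0f;
  float w32 = 1.0f;
  const float update = 1e-4f; // learning_rate * gradient, a typical small step

  // Just below 1.0, FP16 can only represent values about 5e-4 apart, so the
  // result of the subtraction rounds back to 1.0 and the update is lost.
  w16 = static_cast<_Float16>(static_cast<float>(w16) - update);
  // FP32 keeps the update.
  w32 = w32 - update;

  std::printf("fp16: %.6f  fp32: %.6f\n",
              static_cast<double>(static_cast<float>(w16)),
              static_cast<double>(w32));
  return 0;
}
```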
@taos-ci (Collaborator) left a comment


@jijoongmoon, 💯 All CI checkers are successfully verified. Thanks.

DonghakPark added several commits to DonghakPark/nntrainer that referenced this pull request on May 10 and further commits on May 16, May 17, and May 27, 2024, each with the same message:

This PR is to update the mixed precision layer.
- integrate nnstreamer#2568 & nnstreamer#2455
- will update more test

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Donghak PARK <[email protected]>
@@ -1680,6 +1680,13 @@ class Tensor {
*/
Tensor clone() const;

/**
* @brief Convient wrapper for inplace copy of @a this.
Contributor


Suggested change
* @brief Convient wrapper for inplace copy of @a this.
* @brief Convenient wrapper for inplace copy of @a this.

Is it a typo? Do you mean convenient?

@@ -114,6 +114,7 @@ class Weight : public Var_Grad {
*
* @param v Already created variable object
* @param g Already created gradient object
* @param v32 Already created gradient object
Contributor


Suggested change
* @param v32 Already created gradient object
* @param v32 Already created variable32 object

@jijoongmoon (Collaborator, Author)

closed by #2663
