This repository has been archived by the owner on Feb 8, 2023. It is now read-only.
forked from intel-analytics/ipex-llm
bug fix: DLModel prediction #4

Merged: sperlingxx merged 2 commits into alibaba-archive:bugfix_DLModel_prediction from tosky001:predict_bug_fix on Jan 16, 2018.
Conversation
fix wrong initial shape of the Linear DLModel
sperlingxx approved these changes on Jan 16, 2018.
```diff
@@ -91,6 +91,27 @@ class DLEstimatorSpec extends FlatSpec with Matchers with BeforeAndAfter {
     assert(correct > nRecords * 0.8)
   }
+
+  "An DLEstimator" should "throws exception when DLModel is predicting with DLModel.train=True" in {
+    val model = new Sequential().add(Linear[Float](6, 4))
```
Review comment on this line: please check the outputSize of Linear.
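For context on that comment: BigDL's `Linear[Float](inputSize, outputSize)` fixes both the expected feature width and the width of the layer's output, so a mismatched `outputSize` gives the DLModel a wrong prediction shape. A minimal sketch of how the two sizes show up at runtime, assuming the standard BigDL `nn` API (the concrete sizes here are illustrative, not the PR's final values):

```scala
import com.intel.analytics.bigdl.nn.{Linear, Sequential}
import com.intel.analytics.bigdl.tensor.Tensor

// Linear(inputSize = 6, outputSize = 2) expects 6 features per row and
// emits 2 values per row, so downstream predictions have 2 columns.
val model = Sequential[Float]().add(Linear[Float](6, 2))

// Forward one 6-dimensional sample; the output shape is 1x2.
val output = model.forward(Tensor[Float](1, 6).rand()).toTensor[Float]
println(output.size().mkString("x")) // prints "1x2"
```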
sperlingxx added a commit that referenced this pull request on Feb 7, 2018:
* bug fix: DLModel prediction (#4): make sure DLModel.train=False when predicting in pipeline API
* 1. broadcast transformer in DLModel.transform; 2. remove useless ut
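The "broadcast transformer" item refers to the standard Spark pattern of shipping a read-only object to executors once, rather than serializing it into every task closure. A minimal sketch of that pattern with a stand-in transformer (the function below is illustrative; it is not the actual transformer used by `DLModel.transform`):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("broadcast-sketch").master("local[*]").getOrCreate()
val sc = spark.sparkContext

// Stand-in for the feature transformer that DLModel.transform applies.
val transformer: Array[Float] => Array[Float] = _.map(_ / 255f)

// Broadcast once: each executor holds a single shared copy instead of
// deserializing one copy per task.
val bcTransformer = sc.broadcast(transformer)

val rows = sc.parallelize(Seq(Array(255f, 128f), Array(64f, 0f)))
val transformed = rows.map(row => bcTransformer.value(row))
transformed.collect().foreach(r => println(r.mkString(",")))
```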
sperlingxx pushed a commit that referenced this pull request on May 19, 2019:
This feature enables mkl-dnn support, which can speed up deep learning models. We wrap the native C API in Java; the wrappers live in the BigDL-core projects. In BigDL, we integrated convolution, batchnorm, maxpooling, avgpooling, relu, lrn, softmax, caddtable and concattable. Currently, it supports creating models that contain only dnn layers or containers. Because the data layout is optimized in mkl-dnn, the mkl-dnn model uses `DnnTensor`, which wraps a native buffer, as its default tensor. So there are some notes:

1. Users should copy data from the JVM heap at the first layer and copy it back to the JVM heap at the last layer.
2. Users should compile the model with the phase (training/inference) and the input tensor size; the remaining information is then inferred and allocated.

* fix: linear performance issue and serialization of java object in MklDnnTensor
* memory leak refactor
* memory leak and bn performance issues:
  1. Memory leak: the internal buffer of an MklDnnTensor should not be re-assigned without releasing it, so we check it first. At the first iteration, or after the input size changes, we create a new MklDnnTensor as a buffer.
  2. Bn perf: the JIT BatchNormalization only supports avx2 or avx512, which has much better performance than the ref version. The input and gradOutput formats should be the same to get the best performance.
* test: add some test cases for BatchNorm. Float computation on the JVM does not exactly match C/C++/native, and batch norm amplifies the difference, e.g. 10^-8 -> 10^-4 -> 10^-1
* fix: rebase with upstream master:
  1. Concat and ConcatTable should inherit from DynamicContainer.
  2. updateParameters has been deprecated.
  3. zeroGradParameters should be final, but for now Linear should use it.
  4. Some other syntax or semantic errors.
* perf: single node and single model performance
* perf: single model
* feat: add fusion for mkl-dnn
* test: add test utils to compare dnn output
* test: add some tests compared with caffe
* add unit tests for dnn tensor
* add unit test for reorder memory
* test: fix the test regression errors
* checkin reorder manager
* add backward for sequential
* fix some bugs
* update core ref
* add unit tests
* refactor: move the static classes DataType, AlgKind and so on to standalone classes (#4)
* refactor: delete MklDnn.MemoryFormat
* refactor: move the static classes DataType, AlgKind and so on to standalone classes
* fix: core refactor errors
* refactor: spec errors (#5)
* Mkl dnn dev (#6)
* checkin reorder manager
* add container and refine reorder manager
* fix merge issue
* add join table forward
* refine interface (#7)
* add LRN and ReLU
* add pooling
* refactor: conv + linear + bn
* add JoinTable backward
* refactor: conv + linear + bn
* add cAddTable concattable
* fix: reorder failed on some of the convs
* refactor: softmax
* refactor: fusion support
* refactor: resnet_50
* refactor: move tests to this branch
* refactor: delete useless files and enable the special old tests; delete unused methods in MklDnnOps; fix scalastyle check
* fix: rebase with upstream
* fix: ignore the prototxt tests
* fix: do not change the core commit ref
* fix: move set num of threads for mkldnn to ResNet50Perf
* fix: serialization disabled for mkldnn module
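The two notes above translate into a specific usage pattern: build a dnn-only model, then compile it with a phase and input size before running data through it. A sketch of that workflow; the identifiers here (`mkldnn.Sequential`, `Input`, `Output`, `compile`, `InferencePhase`, `Memory.Format.nc`) are assumptions drawn from the commit message and BigDL's mkl-dnn package layout, not a verified API listing:

```scala
import com.intel.analytics.bigdl.nn.mkldnn._
import com.intel.analytics.bigdl.nn.mkldnn.Phase.InferencePhase
import com.intel.analytics.bigdl.mkl.Memory
import com.intel.analytics.bigdl.tensor.Tensor

// Note 1: Input/Output mark the boundary where data is copied between
// JVM-heap tensors and native DnnTensor buffers.
val model = Sequential()
  .add(Input(Array(4, 6), Memory.Format.nc)) // batch of 4, 6 features
  .add(Linear(6, 2))
  .add(Output(Memory.Format.nc))

// Note 2: compile with the phase (and the input size declared above) so
// the remaining buffer shapes can be inferred and allocated natively.
model.compile(InferencePhase)

val out = model.forward(Tensor[Float](4, 6).rand())
```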
What changes were proposed in this pull request?
Make sure DLModel.train=False when predicting in pipeline API
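In BigDL terms, `train=False` means calling `evaluate()` on the module before prediction, so layers such as Dropout and BatchNorm switch to inference behavior. A minimal sketch of that intent (the Dropout model below is illustrative, not the PR's actual code):

```scala
import com.intel.analytics.bigdl.nn.{Dropout, Linear, Sequential}
import com.intel.analytics.bigdl.tensor.Tensor

val module = Sequential[Float]()
  .add(Linear[Float](6, 2))
  .add(Dropout[Float](0.5))

// evaluate() recursively switches the module out of training mode, so
// Dropout passes inputs through unchanged and predictions are stable.
module.evaluate()
assert(!module.isTraining())

val prediction = module.forward(Tensor[Float](1, 6).rand())
```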
How was this patch tested?
unit tests