
Update djl #200

Closed · wants to merge 1 commit into from

Conversation


@renovate renovate bot commented Apr 5, 2021

WhiteSource Renovate

This PR contains the following updates:

| Package | Change |
|---|---|
| ai.djl.mxnet:mxnet-engine (source) | 0.3.0 -> 0.11.0 |
| ai.djl.mxnet:mxnet-native-auto (source) | 1.6.0 -> 1.8.0 |
| ai.djl:api (source) | 0.3.0 -> 0.11.0 |
| ai.djl:model-zoo (source) | 0.3.0 -> 0.11.0 |

Release Notes

deepjavalibrary/djl

v0.11.0

DJL v0.11.0 brings the new XGBoost 1.3.1 engine, updates PyTorch to 1.8.1, TensorFlow to 2.4.1, Apache MXNet to 1.8.0, and PaddlePaddle to 2.0.2, and introduces several new features:

Key Features
  • Supports XGBoost 1.3.1 engine inference: you can now run prediction using models trained in XGBoost (see the sketch after this list).
  • Upgrades PyTorch to 1.8.1 with CUDA 11.1 support.
  • Upgrades TensorFlow to 2.4.1 with CUDA 11.0 support.
  • Upgrades Apache MXNet to 1.8.0 with CUDA 11.0 support.
  • Upgrades PaddlePaddle to 2.0.2.
  • Upgrades SentencePiece to 0.1.95.
  • Introduces the djl-serving brew package: now you can install djl-serving with brew install djl-serving.
  • Introduces the djl-serving plugins.
  • Introduces Amazon Elastic Inference support.
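For illustration, a minimal sketch of what XGBoost inference can look like through DJL's generic Criteria API; the model path, NDList types, and input shape below are assumptions for the example, not taken from these release notes:

```java
import ai.djl.inference.Predictor;
import ai.djl.ndarray.NDList;
import ai.djl.ndarray.types.Shape;
import ai.djl.repository.zoo.Criteria;
import ai.djl.repository.zoo.ModelZoo;
import ai.djl.repository.zoo.ZooModel;
import java.nio.file.Paths;

// Hypothetical local model file trained with XGBoost.
Criteria<NDList, NDList> criteria = Criteria.builder()
        .setTypes(NDList.class, NDList.class)
        .optEngine("XGBoost")                          // select the new engine
        .optModelPath(Paths.get("/path/to/model.xgb")) // hypothetical path
        .build();

try (ZooModel<NDList, NDList> model = ModelZoo.loadModel(criteria);
     Predictor<NDList, NDList> predictor = model.newPredictor()) {
    NDList features = new NDList(model.getNDManager().ones(new Shape(1, 4)));
    NDList prediction = predictor.predict(features);
}
```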
Enhancement
  • Improves TensorFlow performance by reducing GC pressure and fixing a memory leak (#892)
  • djl-serving can now run all the engines out of the box (#886)
  • Improves DJL training by using multi-threading on each GPU (#​743)
  • Implements several operators:
    • Adds boolean set method to NDArray (#​784)
    • Adds batch dot product operator (#​849)
    • Adds norm operator to PyTorch (#​692)
    • Adds one hot operator (#​684)
    • Adds weight decay to Loss (#​788)
  • Adds setGraphExecutorOptimize option for PyTorch engine. (#​904)
  • Introduces String tensor support for ONNXRuntime (#​724)
  • Introduces several API improvements
    • Creates ObjectDetectionDataset (#​683)
    • Improves Block usability (#​712)
    • Adds BlockFactory feature in model loading (#​805)
    • Allows PyTorch stream model loading (#​729)
    • Adds NDList decode from InputStream (#734); see the sketch after this list
    • Adds SymbolBlock Serialization (#​687)
  • Introduces model searching feature in djl central (#​799)
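As a small illustration of the InputStream decoding added in #734, a sketch (values and shapes are arbitrary):

```java
import ai.djl.ndarray.NDList;
import ai.djl.ndarray.NDManager;
import ai.djl.ndarray.types.Shape;
import java.io.ByteArrayInputStream;

try (NDManager manager = NDManager.newBaseManager()) {
    NDList original = new NDList(manager.ones(new Shape(2, 2)));
    byte[] bytes = original.encode();     // serialize the NDList to bytes
    // decode straight from any InputStream, e.g. a file or network stream
    NDList restored = NDList.decode(manager, new ByteArrayInputStream(bytes));
}
```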
Documentation and examples
Breaking change
  • Renames CheckpointsTrainingListener to SaveModelTrainingListener (#​686)
  • Removes erroneous random forest application (#​726)
  • Deletes DataManager class (#​691)
  • Classes under the ai.djl.basicdataset package have been moved into sub-packages.
Bug Fixes
  • Fixes BufferOverflowException when handling subimages (#866)
  • Fixes ONNXRuntime 2nd engine dependency from IrisTranslator (#​853)
  • Fixes sequenceMask error when n dimension is 2 (#​828)
  • Fixes TCP port range bug in djl-serving (#773)
  • Fixes one array case for concat operator (#​739)
  • Fixes non-zero operator for PyTorch (#​704)
Known issues
Contributors

This release is thanks to the following contributors:

v0.10.0

DJL v0.10.0 brings the new engines PaddlePaddle 2.0 and TFLite 2.4.1, updates PyTorch to 1.7.1, and introduces several new features:

Key Features
  • Supports PaddlePaddle 2.0 engine inference: now you can run prediction using models trained in PaddlePaddle.

  • Introduces the PaddlePaddle Model Zoo with new models. Please see examples for how to run them.

  • Upgrades TFLite engine to v2.4.1. You can convert TensorFlow SavedModel to TFLite using this converter.

  • Introduces DJL Central to easily browse and view models available in DJL’s ModelZoo.

  • Introduces generic Bert Model in DJL (#​105)

  • Upgrades PyTorch to 1.7.1

Enhancement
  • Enables listing input and output classes in ModelZoo lookup (#​624)
  • Improves PyTorch performance by using PyTorch index over engine agnostic solution (#​638)
  • Introduces various fixes and improvements for MultiThreadedBenchmark (#​617)
  • Makes the default engine deterministic when multiple engines are in the dependencies (#603)
  • Adds norm operator, similar to numpy.linalg.norm (#579); see the sketch after this list
  • Refactors the DJL Trackers to use builder patterns (#​562)
  • Adds the NDArray stopGradient and scaleGradient functions (#​548)
  • Model Serving now supports scaling up and down (#​510)
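A minimal sketch of the norm operator added in #579; the values are illustrative:

```java
import ai.djl.ndarray.NDArray;
import ai.djl.ndarray.NDManager;

try (NDManager manager = NDManager.newBaseManager()) {
    NDArray x = manager.create(new float[] {3f, 4f});
    NDArray l2 = x.norm();   // L2 norm, like numpy.linalg.norm -> 5.0
    System.out.println(l2.getFloat());
}
```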
Documentation and examples
  • Introduces the DJL 101 Video Series on the DJL YouTube Channel
  • Adds the documentation for Applications (#​673)
  • Adds an introduction for Engines (#​660)
  • Adds documents for DJL community and forums (#​646)
  • Adds documents on community leaders (#​572)
  • Adds more purpose to the block tutorial and miscellaneous docs (#​607)
  • Adds a DJL Paddle OCR Example (#​568)
  • Adds a TensorFlow Amazon review Jupyter notebook example
Breaking change
  • Renames DJL-Easy to DJL-Zero (#​519)
  • Makes RNN operators generic across engines (#​554)
  • Renames CheckpointsTrainingListener to SaveModelTrainingListener (#​573)
  • Makes Initialization optional in training (#​533)
  • Makes SoftmaxCrossEntropyLoss's fromLogit flag mean inputs are un-normalized (#​639)
  • Refactors the Vocabulary builder API
  • Refactors the SymbolBlock with AbstractSymbolBlock (#​491)
Bug Fixes
  • Fixes the benchmark rss value #​656
  • Fixes the recurrent block memory leak and the output shape calculation (#​556)
  • Fixes the NDArray slice size (#​550)
  • Fixes #​493: verify-java plugin charset bug (#​496)
  • Fixes #​484: support arbitrary URL scheme for repository
  • Fixes AbstractBlock inputNames and the inputShapes mismatch bug
Known issues
  • The training tests fail on GPU and Windows CPU if 3 engines (MXNet, PyTorch, TensorFlow) are loaded and run together
Contributors

This release is thanks to the following contributors:

v0.9.0

DJL 0.9.0 brings MXNet inference optimizations, abundant new PyTorch feature support, TensorFlow Windows GPU support, and an experimental DLR engine that supports TVM models.

Key Features
  • Add experimental DLR engine support. Now you can run TVM models with DJL
MXNet
  • Improve the MXNet JNA layer by reusing String, String[] and PointerArray with an object pool, which reduces GC time significantly
PyTorch
  • You can easily create a COO sparse tensor with the following code snippet:

```java
// three non-zero values at coordinates (0,2), (1,0) and (1,2) of a 2x4 matrix
long[][] indices = {{0, 1, 1}, {2, 0, 2}};
float[] values = {3, 4, 5};
FloatBuffer buf = FloatBuffer.wrap(values);
NDArray coo = manager.createCoo(buf, indices, new Shape(2, 4));
```
  • If the input of your TorchScript model needs a List or Dict type, we now provide simple one-dimension support:

```java
// assume your TorchScript model takes model({'input': input_tensor});
// you pass this information to DJL by setting the NDArray name
NDArray array = manager.ones(new Shape(2, 2));
array.setName("input1.input");
```
  • We support loading ExtraFilesMap:

```java
// saving ExtraFilesMap
Criteria<Image, Classifications> criteria = Criteria.builder()
  ...
  .optOption("extraFiles.dataOpts", "your value")  // <- pass in here
  ...
```
TensorFlow
  • Windows GPU is now supported
Several engine upgrades:

| Engine | Version |
|---|---|
| PyTorch | 1.7.0 |
| TensorFlow | 2.3.1 |
| fastText | 0.9.2 |
Enhancement
  • Add docker file for serving
  • Add Deconvolution support for MXNet engine
  • Support PyTorch COO Sparse tensor
  • Add CSVDataset; you can find a sample usage here
  • Upgrade TensorFlow to 2.3.1
  • Upgrade PyTorch to 1.7.0
  • Add randomInteger operator support for MXNet and PyTorch engine
  • Add PyTorch Profiler
  • Add TensorFlow Windows GPU support
  • Support loading the model from jar file
  • Support 1-D list and dict input for TorchScript
  • Remove the Pointer class being used for JNI to relieve Garbage Collector pressure
  • Combine several BertVocabulary into one Vocabulary
  • Add loading the model from Path class
  • Support ExtraFilesMap for PyTorch model inference
  • Allow both int32 & int64 for prediction & labels in TopKAccuracy
  • Refactor MXNet JNA binding to reduce GC time
  • Improve PtNDArray set method to use ByteBuffer directly and avoid copy during tensor creation
  • Support experimental MXNet optimizeFor method for accelerator plugin.
Documentation and examples
  • Add Amazon Review Ranking Classification
  • Add Scala Spark example code on Jupyter Notebook
  • Add Amazon SageMaker Notebook and EMR 6.2.0 examples
  • Add DJL benchmark instruction
Bug Fixes
  • Fix PyTorch Android NDIndex issue
  • Fix Apache NiFi issue when loading multiple native libraries in the same Java process
  • Fix TrainTicTacToe not training issue
  • Fix Sentiment Analysis training example and FixedBucketSampler
  • Fix NDArray from DataIterable not being attached to the NDManager properly
  • Fix WordPieceTokenizer infinite loop
  • Fix randomSplit dataset bug
  • Fix convolution and deconvolution output shape calculations
Contributors

Thank you to the following community members for contributing to this release:

Frank Liu(@​frankfliu)
Lanking(@​lanking520)
Kimi MA(@​kimim)
Lai Wei(@​roywei)
Jake Lee(@​stu1130)
Zach Kimberg(@​zachgk)
0xflotus(@​0xflotus)
Joshua(@​euromutt)
mpskowron(@​mpskowron)
Thomas(@​thhart)
DocRozza(@​docrozza)
Wai Wang(@​waicool20)
Trijeet Modak(@​uniquetrij)

v0.8.0

DJL 0.8.0 is a release closely following 0.7.0 to fix a few key bugs along with some new features.

Key Features
  • Search model zoo with criteria (see the sketch after this list)
  • Standard BERT transformer and WordpieceTokenizer for more BERT tasks
  • Simplify MRL and Remove Anchor
  • Simplify and Standardize CV Model
  • Improve Model describe input and output
  • String NDArray support (only for TensorFlow Engine)
  • Add erfinv operator support
  • MXNet 1.6.0 backward compatibility, now you can switch MXNet versions (1.6 and 1.7) using DJL 0.8.0
  • Combined the pytorch-engine-precxx-11 and pytorch-engine packages
  • Upgrade ONNX Runtime from 1.3.1 to 1.4.0
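A minimal sketch of searching the model zoo with criteria, as mentioned in the first bullet; the application, filter key, and input/output types below are illustrative assumptions:

```java
import ai.djl.Application;
import ai.djl.modality.Classifications;
import ai.djl.modality.cv.Image;
import ai.djl.repository.zoo.Criteria;
import ai.djl.repository.zoo.ModelZoo;
import ai.djl.repository.zoo.ZooModel;

Criteria<Image, Classifications> criteria = Criteria.builder()
        .optApplication(Application.CV.IMAGE_CLASSIFICATION)
        .setTypes(Image.class, Classifications.class)
        .optFilter("layers", "50")   // e.g. ask for a 50-layer ResNet variant
        .build();

try (ZooModel<Image, Classifications> model = ModelZoo.loadModel(criteria)) {
    // the first model matching the criteria is downloaded and loaded
}
```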
Documentation and examples
  • Object Detection with TensorFlow saved model example
  • Text Classification with TensorFlow BERT model example
  • Added more documentation on TensorFlow engine.
Bug Fixes
  • Fixed MXNet multithreading bug and updated multi-threading documentation
  • Fixed TensorFlow 2.3 native binaries for Windows platform
Known issues
  • You need to add your own Translator when loading image classification models (ResNet, MobileNet) from the TensorFlow model zoo; refer to the example here.
Contributors

Thank you to the following community members for contributing to this release:

Dennis Kieselhorst, Frank Liu, Jake Cheng-Che Lee, Lai Wei, Qing Lan, Zach Kimberg, uniquetrij

v0.7.0

DJL 0.7.0 brings SentencePiece for tokenization, GraalVM support for the PyTorch engine, a new set of neural network operators, a BOM module, a reinforcement learning interface, and an experimental DJL Serving module.

Key Features
  • Now you can leverage the powerful SentencePiece library to do text processing including tokenization, de-tokenization, encoding and decoding. You can find more details on extension/sentencepiece; see the sketch after this list.
  • Engine upgrade:
    • MXNet engine: 1.7.0-backport
    • PyTorch engine: 1.6.0
    • TensorFlow: 2.3.0
  • MXNet multi-GPU training is now boosted by MXNet KVStore by default, which saves significant GPU memory copy overhead.
  • GraalVM is fully supported for both regular execution and native images with the PyTorch engine. You can find more details in the GraalVM example.
  • Add a new set of neural network operators that offer full control over parameters for the CV domain, similar to PyTorch's nn.functional module. You can find the operator methods in their Block classes:

```java
Conv2d.conv2d(NDArray input, NDArray weight, NDArray bias, Shape stride, Shape padding, Shape dilation, int groups);
```
  • Bill of Materials (BOM) is introduced to manage dependency versions for you. In DJL, the engine you use is usually tied to a specific version of the native package. By adding a BOM dependency as shown below, you won't have to worry about versions anymore.

Maven:

```xml
<dependency>
    <groupId>ai.djl</groupId>
    <artifactId>bom</artifactId>
    <version>0.7.0</version>
    <type>pom</type>
    <scope>import</scope>
</dependency>
```

Gradle:

```groovy
implementation platform("ai.djl:bom:0.7.0")
```
  • JDK 14 is now supported
  • New reinforcement learning interface including RlAgent, RlEnv, etc.; you can see a comprehensive TicTacToe example.
  • Supports the DJL Serving module. With only a single command, you can now deploy your model without writing server code or a server proxy config:

```sh
cd serving && ./gradlew run --args="-m https://djl-ai.s3.amazonaws.com/resources/test-models/mlp.tar.gz"
```
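A minimal tokenization sketch with the SentencePiece extension, referenced from the first bullet above; the model path is a placeholder and the snippet assumes a pre-trained SentencePiece model file on disk:

```java
import ai.djl.sentencepiece.SpTokenizer;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;

Path modelPath = Paths.get("/path/to/sentencepiece.model"); // placeholder
try (SpTokenizer tokenizer = new SpTokenizer(modelPath)) {
    List<String> tokens = tokenizer.tokenize("Hello DJL!");  // tokenization
    String sentence = tokenizer.buildSentence(tokens);       // de-tokenization
}
```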
Documentation and examples
  • We wrote the D2L book from chapter 1 to chapter 7 with DJL. You can learn basic deep learning concepts and classic CV model architectures with DJL. Repo
  • We launched a new doc website that hosts abundant documents and tutorials for quick search and copy-paste.
  • New Online Sentiment Analysis with Apache Flink.
  • New CTR prediction using Apache Beam and Deep Java Library(DJL).
  • New DJL logging configuration document which includes how to enable slf4j, switch to other logging libraries and adjust log level to debug the DJL.
  • New Dependency Management document that lists DJL internal and external dependencies along with their versions.
  • New CV Utilities document as a tutorial for Image API.
  • New Cache Management document is updated with more detail on the different cache categories.
  • Updated Model Loading document to describe loading models from various sources like S3 and HDFS.
Enhancement
  • Add archive file support to SimpleRepository
  • ImageFolder supports nested folders
  • Add singleton method for LambdaBlock to avoid redundant function reference
  • Add Constant Initializer
  • Add RMSProp, Adagrad, Adadelta Optimizer for MXNet engine
  • Add new tabular dataset: Airfoil Dataset
  • Add new basic dataset: CookingExchange, BananaDetection
  • Add new NumPy like operators: full, sign
  • Make prepare() method in Dataset optional
  • Add new image augmentation APIs that you can add to a Pipeline to enrich your image dataset (see the sketch after this list)
  • Add a handy fromNDArray method to the Image API for quickly converting an NDArray to an Image object
  • Add interpolation option for Image Resize operator
  • Support archive file for s3 repository
  • Import new SSD model from TensorFlow Hub into DJL model zoo
  • Import new Sentiment Analysis model from HuggingFace into DJL model zoo
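A minimal sketch of composing image augmentations in a Pipeline; the particular transforms and sizes are illustrative choices:

```java
import ai.djl.modality.cv.transform.RandomFlipLeftRight;
import ai.djl.modality.cv.transform.Resize;
import ai.djl.modality.cv.transform.ToTensor;
import ai.djl.translate.Pipeline;

// Each transform runs in order over the image NDArray during preprocessing.
Pipeline pipeline = new Pipeline()
        .add(new Resize(224, 224))
        .add(new RandomFlipLeftRight())
        .add(new ToTensor());
```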
Breaking changes
  • Drop CUDA 9.2 support for all platforms including Linux and Windows
  • The arguments of several blocks are changed to align with the signature of other widely used Deep Learning frameworks, please refer to our Java doc site
  • FastText is no longer a full Engine; it becomes part of the NLP utilities in favor of FastTextWordEmbedding
  • Move the WarmUp out of the existing Tracker and introduce the new WarmUpTracker
  • MxPredictor now doesn't copy parameters by default; please make sure to use NaiveEngine when you run inference in a multi-threaded environment
Bug Fixes
  • Fix Validation Epoch Result bug
  • Fix multiple process downloading the same model bug
  • Fix potential concurrent write bug while downloading metadata.json
  • Fix URI parsing error on Windows
  • Fix multi-GPU training crash when the batch size is smaller than the number of devices
  • Fix not setting number of inter-op threads for PyTorch engine
Contributors

Thank you to the following community members for contributing to this release:

Christoph Henkelmann, Frank Liu, Jake Cheng-Che Lee, Jake Lee, Keerthan Vasist, Lai Wei, Qing Lan, Victor Zhu, Zach Kimberg, aksrajvanshi, gstu1130, 蔡舒起

v0.6.0

DJL 0.6.0 brings stable Android support, experimental ONNX Runtime inference support, and experimental training support for PyTorch.

Key Features
  • Stable Android inference support for PyTorch models
    • Provide abstraction for Image processing using ImageFactory
  • Experimental support for inference on ONNX models
  • Initial experimental training and imperative inference support for PyTorch engine
  • Experimental support for using multi-engine
  • Improved usage for NDIndex: support for ellipsis notation and arguments (see the sketch after this list)
  • Improvements to AbstractBlock to simplify custom block creation
  • Added new datasets
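A minimal sketch of the ellipsis notation in NDIndex; the shapes are illustrative:

```java
import ai.djl.ndarray.NDArray;
import ai.djl.ndarray.NDManager;
import ai.djl.ndarray.index.NDIndex;
import ai.djl.ndarray.types.Shape;

try (NDManager manager = NDManager.newBaseManager()) {
    NDArray a = manager.ones(new Shape(2, 3, 4));
    // "..." keeps all leading axes; take index 0 of the last axis
    NDArray b = a.get(new NDIndex("..., 0"));   // resulting shape: (2, 3)
}
```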
Documentation and examples
Breaking changes
  • ModelZoo Configuration changes
  • ImageFactory changes
  • Please refer to javadocs for minor API changes
Known issues
  • Issue with training with MXNet on multi-GPU instances
Contributors

Thank you to the following community members for contributing to this release:

Christoph Henkelmann, Frank Liu, Jake Lee, JonTanS, Keerthan Vasist, Lai Wei, Qing, Qing Lan, Victor Zhu, Zach Kimberg, ai4java, aksrajvanshi

v0.5.0

The DJL 0.5.0 release brings TensorFlow engine inference, initial NLP support, and experimental Android inference with the PyTorch engine.

Key Features
  • TensorFlow engine support with TensorFlow 2.1.0
    • Support NDArray operations, TensorFlow model zoo, multi-threaded inference
  • PyTorch engine improvement with PyTorch 1.5.0
  • Experimental Android Support with PyTorch engine
  • MXNet engine improvement with MXNet 1.7.0
  • Initial NLP support with MXNet engine
    • Training LSTM models
    • Support various text/word embedding, Seq2Seq use cases
    • Added NLP datasets
  • New AWS-AI toolkit to integrate with AWS technologies
    • Load model from s3 buckets directly
  • Improved model-zoo with more models
Documentation and examples
Breaking changes
  • We moved our repository module under the api module. There will be no 0.5.0 version for ai.djl.repository; use ai.djl.api instead.
  • Please refer to DJL Java Doc for some minor API changes.
Known issues:

v0.4.1

The DJL 0.4.1 release includes an important performance improvement on the MXNet engine:

Performance Improvement:
  • Cached MXNet features. This avoids MxNDManager.newSubManager() repeatedly calling getFeature(), which makes JNA calls to native code.
Known Issues:

Same as v0.4.0 release:

  • PyTorch engine doesn't fully support multithreaded inference. You may see random crashes. Single-threaded inference is not impacted. We expect to fix this issue in a future release.
  • We saw random crashes on macOS for the “Transfer Learning on CIFAR-10 Dataset” example in Jupyter Notebook. Running from the command line works fine.

v0.4.0

DJL 0.4.0 brings PyTorch and TensorFlow 2.0 inference support. Now you can use these engines directly from DJL with minimal code changes.

Note: TensorFlow 2.0 is currently in the PoC stage; users will have to build from source to use it. We expect to finish the TF engine in a future release.

Key Features
  • Training improvement
    • Add InputStreamTranslator
  • Model Zoo improvement
    • Add LocalZooProvider
    • Add ListModels API
  • PyTorch Engine support
    • Use the new ai.djl.pytorch:pytorch-native-auto dependency for automatic engine selection and a simpler build/installation process
    • 60+ methods supported
  • PyTorch ModelZoo support
    • Image Classification models: ResNet18 and ResNet50
    • Object Detection model: SSD_ResNet50
  • TensorFlow 2.0 Engine support
    • Supports Eager Execution for imperative mode
    • 30+ methods supported
  • TensorFlow ModelZoo support
    • Image Classification models: ResNet50, MobileNetV2
Breaking Changes

There are a few changes in API and ModelZoo packages to adapt to multi-engine support. Please follow our latest examples to update your code base from 0.3.0 to 0.4.0.

Known Issues
  1. PyTorch engine doesn't fully support multithreaded inference. You may see random crashes. Single-threaded inference is not impacted. We expect to fix this issue in a future release.
  2. We saw random crashes on macOS for the “Transfer Learning on CIFAR-10 Dataset” example in Jupyter Notebook. Running from the command line works fine.

Configuration

📅 Schedule: At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

♻️ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.


  • If you want to rebase/retry this PR, check this box.

This PR has been generated by WhiteSource Renovate. View repository job log here.

@renovate renovate bot changed the title Update djl Update djl - autoclosed Apr 5, 2021
@renovate renovate bot closed this Apr 5, 2021
@renovate renovate bot deleted the renovate/djl branch April 5, 2021 10:04
@renovate renovate bot changed the title Update djl - autoclosed Update djl Apr 6, 2021
@renovate renovate bot restored the renovate/djl branch April 6, 2021 23:01
@renovate renovate bot reopened this Apr 6, 2021
@renovate renovate bot force-pushed the renovate/djl branch from a061b76 to 1b546d5 Compare May 3, 2021 21:27
@Zomis Zomis closed this Dec 9, 2021
@Zomis Zomis deleted the renovate/djl branch December 29, 2021 23:10