All Versions
Latest Version
Avg Release Cycle
36 days
Latest Release
1367 days ago

Changelog History

  • v0.8.0

    September 22, 2020
  • v0.7.0 Changes

    September 04, 2020

    πŸ‘ DJL 0.7.0 brings SetencePiece for tokenization, GravalVM support for PyTorch engine, a new set of Nerual Network operators, BOM module, Reinforcement Learning interface and experimental DJL Serving module.

    Key Features

    • Now you can leverage the powerful SentencePiece library for text processing, including tokenization, de-tokenization, encoding, and decoding. You can find more details in extension/sentencepiece.
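
A minimal sketch of what tokenization with the extension might look like, assuming the `SpTokenizer` class from the `ai.djl.sentencepiece` extension and a hypothetical path to a trained SentencePiece model file:

```java
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;

import ai.djl.sentencepiece.SpTokenizer;

public class TokenizeExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical path to a trained SentencePiece model file
        Path modelPath = Paths.get("models/tokenizer.model");
        try (SpTokenizer tokenizer = new SpTokenizer(modelPath)) {
            List<String> tokens = tokenizer.tokenize("Hello DJL");
            System.out.println(tokens);
        }
    }
}
```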
    • ⬆️ Engine upgrade:
      • MXNet engine: 1.7.0-backport
      • PyTorch engine: 1.6.0
      • TensorFlow: 2.3.0
    • 0️⃣ MXNet multi-gpu training is now boosted by MXNet KVStore by default, which avoids significant GPU memory copy overhead.
    • πŸ‘ GraalVM are fully supported on both of regular execution and native image for PyTorch engine. You can find more details on GraalVM example.
    • ➕ Add a new set of Neural Network operators that offer full control over parameters for the CV domain, similar to the PyTorch nn.functional module. You can find each operator method in its Block class:

      Conv2d.conv2d(NDArray input, NDArray weight, NDArray bias, Shape stride, Shape padding, Shape dilation, int groups);

    • 📦 A Bill of Materials (BOM) module is introduced to manage dependency versions for you. In DJL, the engine you use is usually tied to a specific version of the native package. By simply adding the BOM dependency, you no longer need to worry about versions.

      <dependency>
          <groupId>ai.djl</groupId>
          <artifactId>bom</artifactId>
          <version>0.7.0</version>
          <type>pom</type>
          <scope>import</scope>
      </dependency>

      implementation platform("ai.djl:bom:0.7.0")

    • πŸ‘ JDK 14 now get supported

    • 🆕 New Reinforcement Learning interface including RlAgent, RlEnv, etc. You can see a comprehensive TicTacToe example.

    • 👌 Support for the DJL Serving module. With a single command, you can now deploy your model without writing server code or configuring a server proxy.

      cd serving && ./gradlew run --args="-m"

    📚 Documentation and examples

    • We wrote chapters 1 through 7 of the D2L book with DJL. You can learn basic deep learning concepts and classic CV model architectures with DJL. Repo
    • 📄 We launched a new documentation website that hosts abundant documents and tutorials for quick search and copy-paste.
    • New Online Sentiment Analysis with Apache Flink.
    • New CTR prediction using Apache Beam and Deep Java Library (DJL).
    • 🆕 New DJL logging configuration document, which covers how to enable slf4j, switch to other logging libraries, and adjust the log level to debug DJL.
    • 🆕 New Dependency Management document that lists DJL internal and external dependencies along with their versions.
    • 🆕 New CV Utilities document as a tutorial for the Image API.
    • 🆕 The Cache Management document is updated with more detail on the different cache categories.
    • ⚡️ Updated the Model Loading document to describe loading models from various sources such as S3 and HDFS.

    ✨ Enhancements

    • ➕ Add archive file support to SimpleRepository
    • 👍 ImageFolder now supports nested folders
    • ➕ Add a singleton method for LambdaBlock to avoid redundant function references
    • ➕ Add Constant Initializer
    • ➕ Add RMSProp, Adagrad, and Adadelta optimizers for the MXNet engine
    • ➕ Add a new tabular dataset: Airfoil Dataset
    • ➕ Add new basic datasets: CookingExchange, BananaDetection
    • ➕ Add new NumPy-like operators: full, sign
    • 👉 Make the prepare() method in Dataset optional
    • ➕ Add new image augmentation APIs that you can add to a Pipeline to enrich your image dataset
    • ➕ Add a handy fromNDArray method to the Image API for quickly converting an NDArray to an Image object
    • ➕ Add an interpolation option for the Image resize operator
    • 👌 Support archive files in the S3 repository
    • Import a new SSD model from TensorFlow Hub into the DJL model zoo
    • Import a new Sentiment Analysis model from HuggingFace into the DJL model zoo

    💥 Breaking changes

    • ⬇️ Drop CUDA 9.2 support for all platforms, including Linux and Windows
    • ✅ The arguments of several blocks changed to align with the signatures of other widely used deep learning frameworks; please refer to our Java doc site
    • FastText is no longer a full Engine; it is now part of the NLP utilities in favor of FastTextWordEmbedding
    • 🚚 Move WarmUp out of the existing Tracker and introduce the new WarmUpTracker
    • 0️⃣ MxPredictor no longer copies parameters by default; please make sure to use NaiveEngine when running inference in a multi-threaded environment

    πŸ› Bug Fixes

    • 🛠 Fix Validation Epoch Result bug
    • 🛠 Fix bug where multiple processes download the same model
    • 🛠 Fix potential concurrent write bug while downloading metadata.json
    • 🛠 Fix URI parsing error on Windows
    • 🛠 Fix multi-gpu training crash when the batch size is smaller than the number of devices
    • 🛠 Fix inter-op thread count not being set for the PyTorch engine
  • v0.6.0 Changes

    June 25, 2020

    πŸ‘ DJL 0.6.0 brings stable Android support, ONNX Runtime experimental inference support, experimental training support for PyTorch.

    Key Features

    • πŸ‘ Stable Android inference support for PyTorch models
      • Provide abstraction for Image processing using ImageFactory
    • πŸ‘ Experimental support for inference on ONNX models
    • πŸŽ‰ Initial experimental training and imperative inference support for PyTorch engine
    • πŸ‘ Experimental support for using multi-engine
    • πŸ‘Œ Improved usage for NDIndex - support for ellipsis notation, arguments
    • πŸ‘Œ Improvements to AbstractBlock to simplify custom block creation
    • βž• Added new datasets
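
As a sketch of the NDIndex ellipsis notation mentioned above (assuming an NDManager and NumPy-style index strings; names follow the `ai.djl.ndarray` API):

```java
import ai.djl.ndarray.NDArray;
import ai.djl.ndarray.NDManager;
import ai.djl.ndarray.index.NDIndex;
import ai.djl.ndarray.types.Shape;

public class IndexExample {
    public static void main(String[] args) {
        try (NDManager manager = NDManager.newBaseManager()) {
            NDArray array = manager.arange(24f).reshape(new Shape(2, 3, 4));
            // "..." expands to full slices over the leading axes, so this
            // selects index 0 of the last axis, yielding a (2, 3) array
            NDArray slice = array.get(new NDIndex("..., 0"));
            System.out.println(slice.getShape());
        }
    }
}
```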

    📚 Documentation and examples

    💥 Breaking changes

    • 🔧 ModelZoo configuration changes
    • ImageFactory changes
    • ✅ Please refer to the javadocs for minor API changes

    Known issues

    • Issue with training with MXNet in multi-gpu instances
  • v0.5.0 Changes

    May 12, 2020

    🚀 The DJL 0.5.0 release brings TensorFlow engine inference, initial NLP support, and experimental Android inference with the PyTorch engine.

    Key Features

    • πŸ‘ TensorFlow engine support with TensorFlow 2.1.0
      • Support NDArray operations, TensorFlow model zoo, multi-threaded inference
    • PyTorch engine improvement with PyTorch 1.5.0
    • πŸ‘ Experimental Android Support with PyTorch engine
    • MXNet engine improvement with MXNet 1.7.0
    • πŸŽ‰ Initial NLP support with MXNet engine
      • Training LSTM models
      • Support various text/word embedding, Seq2Seq use cases
      • Added NLP datasets
    • πŸ†• New AWS-AI toolkit to integrate with AWS technologies
      • Load model from s3 buckets directly
    • πŸ‘Œ Improved model-zoo with more models

    📚 Documentation and examples

    💥 Breaking changes

    • 🚚 We moved the repository module under the api module. There will be no 0.5.0 version of ai.djl.repository; use ai.djl.api instead.
    • ✅ Please refer to the DJL Java Doc for some minor API changes.

    Known issues:

  • v0.4.1 Changes

    April 06, 2020

    🚀 The DJL 0.4.1 release includes an important performance improvement in the MXNet engine:

    🐎 Performance Improvement:

    • Cached MXNet features. This prevents MxNDManager.newSubManager() from repeatedly calling getFeature(), which makes JNA calls to native code.

    Known Issues:

    🚀 Same as the v0.4.0 release:

    • 🚀 The PyTorch engine doesn't fully support multithreaded inference; you may see random crashes. Single-threaded inference is not impacted. We expect to fix this issue in a future release.
    • 💻 We saw random crashes on Mac in the "Transfer Learning on CIFAR-10 Dataset" example in Jupyter Notebook; running from the command line works fine.
  • v0.4.0 Changes

    March 30, 2020

    πŸ‘ DJL 0.4.0 brings PyTorch and TensorFlow 2.0 inference support. Now you can use these engines directly from DJL with minimum code changes.

    🚀 Note: TensorFlow 2.0 is currently at the PoC stage; users will have to build from source to use it. We expect to finish the TF engine in a future release.

    Key Features

    • Training improvements
      • Add InputStreamTranslator
    • Model Zoo improvements
      • Add LocalZooProvider
      • Add ListModels API
    • 👍 PyTorch engine support
      • Use the new ai.djl.pytorch:pytorch-native-auto dependency for automatic engine selection and a simpler build/installation process
      • 60+ methods supported
    • 👍 PyTorch ModelZoo support
      • Image Classification models: ResNet18 and ResNet50
      • Object Detection model: SSD_ResNet50
    • 👍 TensorFlow 2.0 engine support
      • Support for Eager Execution in imperative mode
      • 30+ methods supported
    • 👍 TensorFlow ModelZoo support
      • Image Classification models: ResNet50, MobileNetV2
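
For reference, wiring up the automatic PyTorch native dependency in Gradle might look like the following; the exact artifact versions here are illustrative, not taken from this release's notes:

```groovy
dependencies {
    implementation "ai.djl:api:0.4.0"
    implementation "ai.djl.pytorch:pytorch-engine:0.4.0"
    // Automatically selects the right native binary for your platform
    runtimeOnly "ai.djl.pytorch:pytorch-native-auto:1.4.0"
}
```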

    💥 Breaking Changes

    ⚡️ There are a few changes in the API and ModelZoo packages to adapt to multi-engine support. Please follow our latest examples to update your code base from 0.3.0 to 0.4.0.

    Known Issues

    🚀 1. The PyTorch engine doesn't fully support multithreaded inference; you may see random crashes. Single-threaded inference is not impacted. We expect to fix this issue in a future release.
    💻 2. We saw random crashes on Mac in the "Transfer Learning on CIFAR-10 Dataset" example in Jupyter Notebook; running from the command line works fine.

  • v0.3.0 Changes

    February 24, 2020

    🚀 This is the v0.3.0 release of DJL

    Key Features

    • πŸ— Use the new ai.djl.mxnet:mxnet-native-auto dependency for automatic engine selection and a simpler build/installation process
    • πŸ†• New Jupyter Notebook based tutorial for DJL
    • πŸ†• New Engine Support for:
      • FastText Engine
      • Started implementation on a PyTorch Engine
    • Simplified training experience featuring:
      • TrainingListeners to easily provide full featured training
      • DefaultTrainingConfig now contains a default optimizer and initializer
      • Easier to transfer from examples to your own code
    • πŸ‘€ Specify the random seed for reproducible training
    • 0️⃣ Run with multiple engines and specify the default using the "DJL_DEFAULT_ENGINE" environment variable or "ai.djl.default_engine" system property
    • ⚑️ Updated ModelZoo design to support unified loading with Criteria
    • Simple random Hyperparameter optimization
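
A sketch of the kind of unified loading that Criteria enables; note that the builder and method names below follow later DJL releases and are assumptions here, not necessarily the 0.3.0 API:

```java
import ai.djl.Application;
import ai.djl.inference.Predictor;
import ai.djl.modality.Classifications;
import ai.djl.modality.cv.Image;
import ai.djl.repository.zoo.Criteria;
import ai.djl.repository.zoo.ZooModel;

public class CriteriaExample {
    public static void main(String[] args) throws Exception {
        // Describe what we want; the model zoo finds a matching model
        Criteria<Image, Classifications> criteria =
                Criteria.builder()
                        .setTypes(Image.class, Classifications.class)
                        .optApplication(Application.CV.IMAGE_CLASSIFICATION)
                        .build();
        try (ZooModel<Image, Classifications> model = criteria.loadModel();
             Predictor<Image, Classifications> predictor = model.newPredictor()) {
            // predictor.predict(image) would run inference here
        }
    }
}
```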

    💥 Breaking Changes

    🚀 DJL is working to further improve the ease of use and correctness of our API. To that end, we have made a number of breaking changes for this release. Here are a few of the areas that had breaking changes:

    • 📇 Renamed TrainingMetrics to Evaluator
    • Replaced CompositeLoss with AbstractCompositeLoss and SimpleCompositeLoss
    • Modified the MLP class
    • ✂ Removed the Matrix class
    • ⚡️ Updates to the NDArray class

    Known Issues

    🏁 1. RNN operators do not work with GPU on Windows.
    🏁 2. Only CUDA_ARCH 37 and 70 are supported on Windows GPU machines.

  • v0.2.1 Changes

    December 18, 2019

    🚀 This is the v0.2.1 release of DJL

    Key Features

    🏁 1. Added support for Windows 10.
    🐧 2. CUDA 9.2 support for all supported operating systems (Linux, Windows).

    Known Issues

    🏁 1. RNN operators do not work with GPU on Windows.
    🏁 2. Only CUDA_ARCH 37 and 70 are supported on Windows GPU machines.

  • v0.2.0 Changes

    November 29, 2019

    🚀 This is the v0.2.0 release of DJL

    Key Features

    1. Deep-learning-engine-agnostic high-level API for training and prediction.
    2. Dataset API to create objects from different dataset formats that work with the training batchifier.
    3. MXNet-Model-Zoo with pre-trained models for Image Classification, Object Detection, Segmentation, and more.
    📄 4. Jupyter Notebooks and examples to help you get started with training and predicting models with DJL.

    πŸ‘ Engines currently supported

    1. Apache MXNet


    The javadocs are available here

  • v0.1.0

    November 01, 2019