Deep Java Library (DJL) v0.7.0 Release Notes

Release Date: 2020-09-04
👍 DJL 0.7.0 brings SentencePiece for tokenization, GraalVM support for the PyTorch engine, a new set of Neural Network operators, a BOM module, a Reinforcement Learning interface, and an experimental DJL Serving module.
- Now you can leverage the powerful SentencePiece library for text processing, including tokenization, de-tokenization, encoding, and decoding. You can find more details in extension/sentencepiece.
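As a minimal sketch of the tokenization flow (assuming the sentencepiece extension is on your classpath and you have a trained SentencePiece model file locally; the model path below is a placeholder, not shipped with DJL):

```java
import ai.djl.sentencepiece.SpTokenizer;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;

public class SpExample {
    public static void main(String[] args) throws Exception {
        // Placeholder path to a trained SentencePiece model file (assumption: you have one locally)
        Path modelPath = Paths.get("build/test/models/sp_model.model");
        try (SpTokenizer tokenizer = new SpTokenizer(modelPath)) {
            // Tokenize a sentence into subword pieces
            List<String> tokens = tokenizer.tokenize("Hello World");
            System.out.println(tokens);
            // buildSentence performs de-tokenization, reassembling the original text
            String sentence = tokenizer.buildSentence(tokens);
            System.out.println(sentence);
        }
    }
}
```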
- ⬆️ Engine upgrade:
- MXNet engine: 1.7.0-backport
- PyTorch engine: 1.6.0
- TensorFlow engine: 2.3.0
- 0️⃣ MXNet multi-GPU training is now backed by MXNet KVStore by default, which avoids substantial GPU memory-copy overhead.
- 👍 GraalVM is fully supported for the PyTorch engine, in both regular execution and native image mode. You can find more details in the GraalVM example.
➕ Add a new set of Neural Network operators that offer full control over parameters for the CV domain, similar to PyTorch's nn.functional module. You can find each operator method in its Block class:
Conv2d.conv2d(NDArray input, NDArray weight, NDArray bias, Shape stride, Shape padding, Shape dilation, int groups);
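For illustration, a sketch of calling the functional operator directly (assuming an engine with native libraries is available, and that a two-argument overload with default stride, padding, dilation, and groups exists alongside the full signature above; shapes are arbitrary):

```java
import ai.djl.ndarray.NDArray;
import ai.djl.ndarray.NDManager;
import ai.djl.ndarray.types.Shape;
import ai.djl.nn.convolutional.Conv2d;

public class ConvExample {
    public static void main(String[] args) {
        try (NDManager manager = NDManager.newBaseManager()) {
            // NCHW input: batch 1, 1 channel, 28x28
            NDArray input = manager.ones(new Shape(1, 1, 28, 28));
            // Weight layout: (outChannels, inChannels, kernelHeight, kernelWidth)
            NDArray weight = manager.ones(new Shape(8, 1, 3, 3));
            NDArray out = Conv2d.conv2d(input, weight);
            // With default stride 1 and no padding: 28 - 3 + 1 = 26
            System.out.println(out.getShape()); // expect (1, 8, 26, 26)
        }
    }
}
```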
📦 A Bill of Materials (BOM) module is introduced to manage dependency versions for you. In DJL, the engine you use is usually tied to a specific version of the native package. By adding the BOM dependency as follows, you no longer need to worry about version mismatches.
<dependency>
    <groupId>ai.djl</groupId>
    <artifactId>bom</artifactId>
    <version>0.7.0</version>
    <type>pom</type>
    <scope>import</scope>
</dependency>
👍 JDK 14 is now supported
👌 Support the DJL Serving module. With a single command, you can now deploy your model without writing server code or configuring a server proxy:
cd serving && ./gradlew run --args="-m https://djl-ai.s3.amazonaws.com/resources/test-models/mlp.tar.gz"
📚 Documentation and examples
- We wrote chapters 1 through 7 of the D2L book with DJL. You can learn basic deep learning concepts and classic CV model architectures with DJL. Repo
- 📄 We launched a new documentation website that hosts abundant documents and tutorials for quick search and copy-paste.
- New Online Sentiment Analysis with Apache Flink.
- New CTR prediction using Apache Beam and Deep Java Library (DJL).
- 🆕 New DJL logging configuration document, which covers how to enable slf4j, switch to other logging libraries, and adjust log levels when debugging DJL.
- 🆕 New Dependency Management document that lists DJL internal and external dependencies along with their versions.
- 🆕 New CV Utilities document as a tutorial for Image API.
- 🆕 New Cache Management document with more detail on the different cache categories.
- ⚡️ Update the Model Loading document to describe loading models from various sources such as S3 and HDFS.
- ➕ Add archive file support to SimpleRepository
- 👍 ImageFolder now supports nested folders
- ➕ Add a singleton method to LambdaBlock to avoid redundant function references
- ➕ Add Constant Initializer
- ➕ Add RMSProp, Adagrad, and Adadelta optimizers for the MXNet engine
- ➕ Add new tabular dataset: Airfoil Dataset
- ➕ Add new basic datasets: CookingExchange, BananaDetection
- ➕ Add new NumPy-like operators: full, sign
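A quick sketch of the two new operators (assuming an engine with native libraries is available; `full` is called on `NDManager` and `sign` on `NDArray`):

```java
import ai.djl.ndarray.NDArray;
import ai.djl.ndarray.NDManager;
import ai.djl.ndarray.types.Shape;

public class NpOpsExample {
    public static void main(String[] args) {
        try (NDManager manager = NDManager.newBaseManager()) {
            // full: create an array filled with a constant value, like np.full
            NDArray filled = manager.full(new Shape(2, 3), 7f);
            System.out.println(filled);
            // sign: elementwise -1, 0, or 1 depending on each value's sign
            NDArray signs = manager.create(new float[] {-2.5f, 0f, 4f}).sign();
            System.out.println(signs); // values: -1, 0, 1
        }
    }
}
```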
- 👉 Make the prepare() method in Dataset optional
- ➕ Add new image augmentation APIs that you can add to a Pipeline to enrich your image dataset
- ➕ Add a handy fromNDArray method to the Image API for quickly converting an NDArray to an Image object
- ➕ Add an interpolation option to the Image resize operator
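The three image-related additions above can be sketched together (a hedged example assuming an engine with native libraries is available, that fromNDArray accepts an HWC uint8 array, and that Resize takes an Image.Interpolation argument; shapes are arbitrary):

```java
import ai.djl.modality.cv.Image;
import ai.djl.modality.cv.ImageFactory;
import ai.djl.modality.cv.transform.Resize;
import ai.djl.modality.cv.transform.ToTensor;
import ai.djl.ndarray.NDArray;
import ai.djl.ndarray.NDList;
import ai.djl.ndarray.NDManager;
import ai.djl.ndarray.types.DataType;
import ai.djl.ndarray.types.Shape;
import ai.djl.translate.Pipeline;

public class AugmentExample {
    public static void main(String[] args) {
        try (NDManager manager = NDManager.newBaseManager()) {
            // fromNDArray: wrap an NDArray as an Image (assumption: HWC uint8 layout)
            NDArray raw = manager.ones(new Shape(64, 64, 3), DataType.UINT8);
            Image image = ImageFactory.getInstance().fromNDArray(raw);
            System.out.println(image.getWidth() + "x" + image.getHeight());

            // Augmentation pipeline: resize with bilinear interpolation, then to a CHW float tensor
            Pipeline pipeline = new Pipeline()
                    .add(new Resize(224, 224, Image.Interpolation.BILINEAR))
                    .add(new ToTensor());
            NDArray hwc = manager.ones(new Shape(64, 64, 3)); // HWC float input
            NDList out = pipeline.transform(new NDList(hwc));
            System.out.println(out.singletonOrThrow().getShape()); // expect (3, 224, 224)
        }
    }
}
```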
- 👌 Support archive file for s3 repository
- Import new SSD model from TensorFlow Hub into DJL model zoo
- Import new Sentiment Analysis model from HuggingFace into DJL model zoo
💥 Breaking changes
- ⬇️ Drop CUDA 9.2 support for all platforms, including Linux and Windows
- ✅ The arguments of several blocks have changed to align with the signatures of other widely used deep learning frameworks; please refer to our Javadoc site
- FastText is no longer a full Engine; it is now part of the NLP utilities
- 🚚 Move WarmUp out of the existing Tracker and introduce a new one
- MxPredictor no longer copies parameters by default; please make sure to use NaiveEngine when running inference in a multi-threaded environment
🐛 Bug Fixes
- 🛠 Fix validation epoch result bug
- 🛠 Fix a bug where multiple processes downloaded the same model
- 🛠 Fix potential concurrent write bug while downloading metadata.json
- 🛠 Fix URI parsing error on Windows
- 🛠 Fix multi-GPU training crash when the batch size is smaller than the number of devices
- 🛠 Fix the inter-op thread count not being set for the PyTorch engine