Releases: oneapi-src/oneDNN
v1.1.3
v1.2-rc
This is a release candidate for DNNL v1.2. Please provide feedback and report bugs via GitHub issues.
v1.1.2
This is a patch release containing the following changes to v1.1.1:
- Fixed threading over the spatial dimensions in bfloat16 batch normalization (017b6c9)
- Fixed read past end-of-buffer error for int8 convolution (7d6f45e)
- Fixed condition for dispatching optimized channel blocking in fp32 backward convolution on Intel Xeon Phi(TM) processor (846eba1)
- Fixed fp32 backward convolution for shapes with spatial strides over the depth dimension (002e3ab)
- Fixed softmax with zero sizes on GPU (936bff4)
- Fixed int8 deconvolution with dilation when ih <= dh (3e3bacb)
- Re-enabled fp32 -> u8 reorder for RNN (a2c2507)
- Fixed segmentation fault in bfloat16 backward convolution from kd_padding=0 computation (52d476c)
- Fixed segmentation fault in bfloat16 forward convolution due to push/pop imbalance (4f6e3d5)
- Fixed library version for OS X build (0d85005)
- Fixed padding by channels in concat (a265c7d)
- Added full text of third party licenses and copyright notices to LICENSE file (79f204c)
- Added separate README for binary packages (28f4c96)
- Fixed computing per-oc mask in RNN (ff3ffab)
- Added workaround for number of cores calculation in Xbyak (301b088)
v2.0-beta03
This is a preview release for oneDNN v2.0. The release is based on oneDNN v1.1 and the release notes below include incremental changes.
Binary distribution of this software is available as Intel(R) oneAPI Deep Neural Network Library in Intel(R) oneAPI.
New functionality
- SYCL API extensions and interoperability with SYCL code
- Support for Intel DPC++ compiler and runtime
Usability
- SYCL interoperability examples
Known Limitations
- Some f32/f16 convolutions with non-square spatial shape of filters may produce incorrect results on GPU.
- Some bf16 backward convolutions with 3D spatial and negative padding may produce segfault on CPU.
- Non-Intel GPUs are not supported. The library API allows creating a DNNL engine by index (the order of devices is determined by the SYCL runtime) and does not check whether a GPU device is non-Intel. For more control, users can create a DNNL engine by passing a SYCL device and context explicitly (see the sketch after this list).
- RNN primitive may hang on GPU if the number of recurrent cells is bigger than 40.
- int8 RNN may produce incorrect results on GPU.
- Backward propagation of the layer normalization primitive produces incorrect results.
- Intel Processor Graphics Gen11 is not supported.
- GPU kernels that run longer than a certain time (which depends on the OS and system settings) may cause the application to appear to hang. To avoid this for DPC++ or OpenCL programs, including DNNL examples, configure the driver to disable this timeout.
On Linux:
$ sudo bash -c 'echo N > /sys/module/i915/parameters/enable_hangcheck'
On Windows, increase the TdrDelay and TdrDdiDelay values in the registry.
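To illustrate the workaround for the missing vendor check mentioned above, below is a minimal sketch of creating an engine from an explicitly chosen SYCL device and context. It assumes the DPC++/SYCL build of the library; the interop engine constructor taking a SYCL device and context is assumed from this beta's SYCL API and its exact form may differ between releases.
```cpp
// Sketch only: pick the GPU device explicitly instead of relying on the
// engine index ordering determined by the SYCL runtime.
#include <CL/sycl.hpp>
#include "dnnl.hpp"

int main() {
    cl::sycl::device dev{cl::sycl::gpu_selector{}};  // chosen GPU device
    cl::sycl::context ctx{dev};                      // its context
    // SYCL interop constructor (assumed): engine(kind, sycl_device, sycl_context)
    dnnl::engine eng(dnnl::engine::kind::gpu, dev, ctx);
    dnnl::stream s(eng);
    return 0;
}
```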
v1.0.4
v1.1.1
This is a patch release containing the following changes to v1.1:
- Fixed zero padding for memory formats with rank 3 and below (f97e174)
- Fixed 'deprecated std::copy' warning with Microsoft C++ Compiler (ee276af)
- Fixed tail scaling for int8 inner product (f2b68c7)
- Fixed correctness issue for int8 GEMM with N=1 (0dd5c13)
- Sum does not override the data type for the destination memory descriptor when it is used with format any (5301981)
- Addressed corner cases in the CPU convolution implementation
v1.0.3
This is a patch release containing the following changes to v1.0.2:
- Fixed zero padding for memory formats with rank 3 and below (4d78aaf)
- Fixed tail scaling for int8 inner product (41b5a7e)
- Sum does not override the data type for the destination memory descriptor when it is used with format any (e979eda)
- Improved s8s8 GEMM and inner product performance (4b44aa5)
- Reduced memory consumption of GEMM-based algorithm for convolution weight gradient (f46b044)
- Fixed negative padding processing in pooling (48ba96a)
- Addressed memory leak in GPU deconvolution (686fc41)
- Addressed memory leak in GPU stream (1206b2f)
- Fixed fp16 GEMM correctness on GPU (c2425d4)
- Fixed GEMM correctness on GPU for the case of small M dimension (ac2683f)
- Addressed the following corner cases in the CPU convolution implementation:
- Fixed tail processing in int8 depthwise convolution (3a0943b)
- Fixed bias padding in bfloat16 depthwise convolution (3d9af7c)
- Fixed correctness issue in s8s8 flavor of depthwise convolution (e4d9049)
- Fixed correctness issue in GEMM-based algorithm for 3D convolutions (161ac40)
- Fixed corner case issues in Intel AVX512 implementation of convolution weight gradient (68f5124)
- Disabled unsupported cases for depthwise convolution weight gradient (5e6e6c8)
- Convolution with 1x1 filter returns unimplemented for cases that have padding in spatial dimensions (9d7cc77)
- Fixed negative padding support in general convolution kernel (b1c602a)
- Fixed padding handling in depthwise convolution backpropagation (04712f6)
- Added support for negative padding in h and d spatial dimensions (7ddce82)
- Fixed segfault in strided convolution backpropagation (b04f3f5)
- Fixed memory corruption in convolution backpropagation (8877bc9)
v0.20.6
v0.21.2
This is a patch release containing the following changes to v0.21.1:
- Fixed performance regression in GEMM (9534621)
- Fixed int8 dilated convolution for some shapes where the input height is less than or equal to the dilation over the height dimension (e68f151)
- Addressed static initialization order issue in bf16 converters (ae8efde)
- Fixed fast reference backward convolution dispatching for 3D-spatial case (5994d63)
v1.1
Performance optimizations
- Improved performance with TBB threading, achieving performance comparable to OpenMP threading.
- Improved int8 and fp32 GEMM performance on systems with Intel AVX-512 and Intel VNNI support.
- Improved softmax performance for NHWC and corresponding blocked layouts.
- Improved RNN cell performance and reduced the dependency of RNN performance on compiler vectorization capabilities.
- Improved reorder performance for some shapes.
New functionality
- Introduced support for layer normalization and binary elementwise primitives (CPU engine).
- Introduced swish (CPU and GPU engines) and gelu (GPU engine) activation support in the elementwise primitive (see the example after this list).
- Introduced bfloat16 data type support in RNN cells (CPU engine).
- Introduced initial int8 and bfloat16 data types support for GPU functionality.
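As an illustration of the new swish activation, here is a minimal sketch using the elementwise primitive on the CPU engine; the tensor shape and alpha value are arbitrary placeholders, not part of this release's examples.
```cpp
#include "dnnl.hpp"
using namespace dnnl;

int main() {
    engine eng(engine::kind::cpu, 0);
    stream s(eng);

    // Arbitrary 4D activation tensor in NCHW layout.
    memory::desc md({2, 16, 7, 7}, memory::data_type::f32, memory::format_tag::nchw);
    memory src(md, eng), dst(md, eng);

    // Elementwise primitive with the swish algorithm: dst = src * sigmoid(alpha * src).
    auto d = eltwise_forward::desc(prop_kind::forward_inference,
            algorithm::eltwise_swish, md, /*alpha=*/1.f);
    auto pd = eltwise_forward::primitive_desc(d, eng);

    eltwise_forward(pd).execute(s, {{DNNL_ARG_SRC, src}, {DNNL_ARG_DST, dst}});
    s.wait();
    return 0;
}
```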
Usability improvements
- TBB threading support is promoted to production quality.
- Introduced support for memory format any for backpropagation of memory-bound primitives. This mechanism allows matching the gradient memory format with the source and destination memory formats from the forward pass (see the sketch after this list).
- Changed default compiler flags to target the Intel SSE4.1 instruction set to make builds portable.
- (experimental) Introduced a caching mechanism that reduces primitive creation time for repeated primitive creation. The functionality is disabled by default and has to be enabled at compile time.
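A minimal sketch of the format any mechanism for backpropagation, shown for convolution backward data; the shapes are placeholders and the forward primitive descriptor is created only to serve as the required hint.
```cpp
#include "dnnl.hpp"
using namespace dnnl;

int main() {
    engine eng(engine::kind::cpu, 0);

    memory::dims src_dims = {32, 64, 28, 28}, wei_dims = {128, 64, 3, 3},
                 dst_dims = {32, 128, 28, 28}, strides = {1, 1}, pad = {1, 1};
    auto f32 = memory::data_type::f32;

    // format_tag::any lets the library choose the gradient layouts; the chosen
    // formats can then be matched against the forward-pass src/dst formats.
    memory::desc src_md(src_dims, f32, memory::format_tag::any);
    memory::desc wei_md(wei_dims, f32, memory::format_tag::any);
    memory::desc dst_md(dst_dims, f32, memory::format_tag::any);

    // Forward primitive descriptor used as a hint for the backward pass.
    auto fwd_d = convolution_forward::desc(prop_kind::forward_training,
            algorithm::convolution_direct, src_md, wei_md, dst_md,
            strides, pad, pad);
    auto fwd_pd = convolution_forward::primitive_desc(fwd_d, eng);

    // Backward-data descriptor with 'any' gradients.
    auto bwd_d = convolution_backward_data::desc(algorithm::convolution_direct,
            src_md, wei_md, dst_md, strides, pad, pad);
    auto bwd_pd = convolution_backward_data::primitive_desc(bwd_d, eng, fwd_pd);

    // Query the gradient layout the library selected.
    memory::desc chosen_diff_src = bwd_pd.diff_src_desc();
    (void)chosen_diff_src;
    return 0;
}
```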
Validation improvements
- Extended benchdnn to cover all supported primitives.
- Introduced a robust validation method for RNN cells in benchdnn. The approach replaces activations with a linear function to make error accumulation more predictable and decrease the number of false positives.
- Extended convolution test coverage.
Thanks to the contributors
This release contains contributions from many Intel Performance Libraries developers as well as Ilia Taraban, Jacek Czaja @jczaja, William Tambellini @WilliamTambellini, Tomasz Kalina, Mateusz Guziak, Daniel Haidachuk, Konstantin Basargin @basargin, Aaron Johnson @aaronjohnson, and Jeremy Wong @jrmwng. We would also like to thank everyone who asked questions and reported issues.