
Commit d683604

21.05 Release (onnx#688)

Signed-off-by: Kevin Chen <kevinch@nvidia.com>
1 parent 41d4883

File tree

5 files changed: +29 -10 lines

README.md

Lines changed: 4 additions & 4 deletions

@@ -16,7 +16,7 @@ For press and other inquiries, please contact Hector Marinez at hmarinez@nvidia.
 
 ## Supported TensorRT Versions
 
-Development on the Master branch is for the latest version of [TensorRT 7.2.2](https://developer.nvidia.com/nvidia-tensorrt-download) with full-dimensions and dynamic shape support.
+Development on the Master branch is for the latest version of [TensorRT 7.2.3.4](https://developer.nvidia.com/nvidia-tensorrt-download) with full-dimensions and dynamic shape support.
 
 For previous versions of TensorRT, refer to their respective branches.
 
@@ -48,8 +48,8 @@ Current supported ONNX operators are found in the [operator support matrix](docs
 ### Dependencies
 
 - [Protobuf >= 3.0.x](https://github.com/google/protobuf/releases)
-- [TensorRT 7.2.2](https://developer.nvidia.com/tensorrt)
-- [TensorRT 7.2.2 open source libaries (master branch)](https://github.com/NVIDIA/TensorRT/)
+- [TensorRT 7.2.3.4](https://developer.nvidia.com/tensorrt)
+- [TensorRT 7.2.3.4 open source libaries (master branch)](https://github.com/NVIDIA/TensorRT/)
 
 ### Building
 
@@ -94,7 +94,7 @@ Python bindings for the ONNX-TensorRT parser are packaged in the shipped `.whl`
 
     python3 -m pip install <tensorrt_install_dir>/python/tensorrt-7.x.x.x-cp<python_ver>-none-linux_x86_64.whl
 
-TensorRT 7.2.2 supports ONNX release 1.6.0. Install it with:
+TensorRT 7.2.3.4 supports ONNX release 1.6.0. Install it with:
 
     python3 -m pip install onnx==1.6.0
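As a quick sanity check before building against the parser, the supported-release constraint can be encoded in a small helper. This is a hypothetical sketch, not part of the repo; `matches_supported` is an illustrative name:

```python
SUPPORTED_ONNX = "1.6.0"  # per the README: TensorRT 7.2.3.4 supports ONNX release 1.6.0

def matches_supported(installed: str, supported: str = SUPPORTED_ONNX) -> bool:
    """Compare an installed onnx version string against the supported release."""
    # Strip any local version suffix (e.g. "1.6.0+cpu") before comparing.
    return installed.split("+")[0] == supported
```

In practice the installed version string would come from `importlib.metadata.version("onnx")` or `onnx.__version__`.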

builtin_op_importers.cpp

Lines changed: 7 additions & 3 deletions

@@ -1922,23 +1922,27 @@ DEFINE_BUILTIN_OP_IMPORTER(InstanceNormalization)
     ASSERT(inputs.at(2).is_weights(), ErrorCode::kUNSUPPORTED_NODE);
     nvinfer1::ITensor* tensorPtr = &convertToTensor(inputs.at(0), ctx);
     int nbDims = tensorPtr->getDimensions().nbDims;
-    ASSERT(nbDims >= 3 && nbDims <= 4 && "TensorRT only supports InstanceNormalization on 3D or 4D tensors!",
+    ASSERT(nbDims >= 3 && nbDims <= 5 && "TensorRT only supports InstanceNormalization on 3D, 4D, or 5D tensors!",
         ErrorCode::kUNSUPPORTED_NODE);
     auto scale_weights = inputs.at(1).weights();
     auto bias_weights = inputs.at(2).weights();
     OnnxAttrs attrs(node, ctx);
     float epsilon = attrs.get("epsilon", 1e-5f);
-
+    const int32_t relu {0}; // the ONNX instance norm op does not use the relu parameter
+    const float alpha {0.f}; // the ONNX instance norm op does not use the alpha parameter
+
     // Populate instanceNormalization plugin properties.
     const std::string pluginName = "InstanceNormalization_TRT";
     const std::string pluginVersion = "1";
     std::vector<nvinfer1::PluginField> f;
     f.emplace_back("epsilon", &epsilon, nvinfer1::PluginFieldType::kFLOAT32, 1);
     f.emplace_back("scales", scale_weights.values, nvinfer1::PluginFieldType::kFLOAT32, scale_weights.count());
     f.emplace_back("bias", bias_weights.values, nvinfer1::PluginFieldType::kFLOAT32, bias_weights.count());
+    f.emplace_back("relu", &relu, nvinfer1::PluginFieldType::kINT32, 1);
+    f.emplace_back("alpha", &alpha, nvinfer1::PluginFieldType::kFLOAT32, 1);
 
     // Create plugin from registry
-    nvinfer1::IPluginV2* plugin = createPlugin(node.name(), importPluginCreator(pluginName, pluginVersion), f);
+    const auto plugin = createPlugin(node.name(), importPluginCreator(pluginName, pluginVersion), f);
 
     ASSERT(plugin != nullptr && "InstanceNormalization plugin was not found in the plugin registry!",
         ErrorCode::kUNSUPPORTED_NODE);
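For readers following the importer change, the operation this plugin computes can be sketched as a NumPy reference. This models ONNX InstanceNormalization semantics only, not the plugin's actual implementation; note it handles 3D, 4D, and 5D inputs uniformly, which is what the relaxed rank check enables:

```python
import numpy as np

def instance_norm_ref(x, scale, bias, epsilon=1e-5):
    """NumPy reference for ONNX InstanceNormalization on an N-D tensor (N >= 3).

    x has layout (batch, channels, *spatial); scale and bias have shape (channels,).
    Each (batch, channel) slice is normalized over its spatial axes.
    """
    spatial_axes = tuple(range(2, x.ndim))
    mean = x.mean(axis=spatial_axes, keepdims=True)
    var = x.var(axis=spatial_axes, keepdims=True)
    # Broadcast the per-channel scale/bias across batch and spatial dims.
    shape = (1, -1) + (1,) * (x.ndim - 2)
    return scale.reshape(shape) * (x - mean) / np.sqrt(var + epsilon) + bias.reshape(shape)
```

Because the spatial axes are computed from `x.ndim`, the same function covers the 5D case this release adds.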

docs/Changelog.md

Lines changed: 16 additions & 1 deletion

@@ -2,6 +2,21 @@
 
 # ONNX-TensorRT Changelog
 
+## 21.05 Container Release - 2021-05-19
+### Added
+- Added support for InstanceNormalization on 5D tensors
+- Added library only build target [#659](https://github.com/onnx/onnx-tensorrt/pull/659)
+- Added support for negative gather indices [#681](https://github.com/onnx/onnx-tensorrt/pull/681)
+- Added support for `DOUBLE`-typed inputs and weights through downcast to float [#674](https://github.com/onnx/onnx-tensorrt/pull/674)
+- Added support for optional plugin fields in FallbackPlugin path [#676](https://github.com/onnx/onnx-tensorrt/pull/676)
+
+### Updated
+- Updated license [#657](https://github.com/onnx/onnx-tensorrt/pull/657)
+
+### Fixes
+- Fixed index offset calculation in GatherElements [#675](https://github.com/onnx/onnx-tensorrt/pull/675)
+- Clarified dynamic shape support for ReverseSequence
+
 ## 21.03 Container Release - 2021-03-09
 ### Added
 - Added opset13 support for `SoftMax`, `LogSoftmax`, `Squeeze`, and `Unsqueeze`
@@ -13,7 +28,7 @@
 
 ## 21.02 Container Release - 2021-01-18
 ### Added
-- Added support for the `ReverseSequence` operator [#590] - https://github.com/onnx/onnx-tensorrt/pull/590
+- Added support for the `ReverseSequence` operator [#590](https://github.com/onnx/onnx-tensorrt/pull/590)
 - Updated `parse()` and `supportsModel()` API calls with an optional `model_path` parameter to support models with external weights [#621](https://github.com/onnx/onnx-tensorrt/pull/621)
 - Added support for the `Celu` operator
 - Added support for the `CumSum` operator
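One entry in the 21.05 changelog, negative gather indices (#681), follows the ONNX `Gather` convention that a negative index counts from the end of the gathered axis. NumPy's `take` uses the same convention, so the semantics can be sketched as:

```python
import numpy as np

data = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])

# ONNX Gather with negative indices counts from the end of the axis:
# -1 selects the last row, 0 the first. np.take models this directly.
gathered = np.take(data, indices=[-1, 0], axis=0)
```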

docs/operators.md

Lines changed: 1 addition & 1 deletion

@@ -4,7 +4,7 @@
 
 TensorRT 7.2 supports operators up to Opset 13. Latest information of ONNX operators can be found [here](https://github.com/onnx/onnx/blob/master/docs/Operators.md)
 
-TensorRT supports the following ONNX data types: FLOAT32, FLOAT16, INT8, and BOOL
+TensorRT supports the following ONNX data types: DOUBLE, FLOAT32, FLOAT16, INT8, and BOOL
 
 > Note: There is limited support for INT32, INT64, and DOUBLE types. TensorRT will attempt to cast down INT64 to INT32 and DOUBLE down to FLOAT where possible. If not possible, TensorRT will throw an error. See the [TensorRT layer support matrix](https://docs.nvidia.com/deeplearning/sdk/tensorrt-support-matrix/index.html#layers-precision-matrix) for more information on data type support.
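The "cast down where possible, otherwise throw an error" behavior described in the note can be modeled with a small helper. This is an illustrative sketch of the documented policy, not TensorRT's implementation:

```python
import numpy as np

def cast_down_int64(arr: np.ndarray) -> np.ndarray:
    """Cast an int64 array to int32, failing loudly if any value would not fit.

    Mirrors the documented policy: cast down where possible; if not possible,
    raise an error instead of silently truncating.
    """
    info = np.iinfo(np.int32)
    if arr.min() < info.min or arr.max() > info.max:
        raise ValueError("int64 values exceed int32 range; cannot cast down")
    return arr.astype(np.int32)
```

The same pattern applies to the DOUBLE-to-FLOAT downcast, with `np.finfo(np.float32)` bounds in place of the integer limits.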

third_party/onnx

Submodule onnx updated 1813 files
