Merged
8 changes: 4 additions & 4 deletions README.md
@@ -16,7 +16,7 @@ For press and other inquiries, please contact Hector Marinez at hmarinez@nvidia.

## Supported TensorRT Versions

Development on the Master branch is for the latest version of [TensorRT 7.2.2](https://developer.nvidia.com/nvidia-tensorrt-download) with full-dimensions and dynamic shape support.
Development on the Master branch is for the latest version of [TensorRT 7.2.3.4](https://developer.nvidia.com/nvidia-tensorrt-download) with full-dimensions and dynamic shape support.

For previous versions of TensorRT, refer to their respective branches.

@@ -48,8 +48,8 @@ Current supported ONNX operators are found in the [operator support matrix](docs
### Dependencies

- [Protobuf >= 3.0.x](https://github.com/google/protobuf/releases)
- [TensorRT 7.2.2](https://developer.nvidia.com/tensorrt)
- [TensorRT 7.2.2 open source libraries (master branch)](https://github.com/NVIDIA/TensorRT/)
- [TensorRT 7.2.3.4](https://developer.nvidia.com/tensorrt)
- [TensorRT 7.2.3.4 open source libraries (master branch)](https://github.com/NVIDIA/TensorRT/)

### Building

@@ -94,7 +94,7 @@ Python bindings for the ONNX-TensorRT parser are packaged in the shipped `.whl`

python3 -m pip install <tensorrt_install_dir>/python/tensorrt-7.x.x.x-cp<python_ver>-none-linux_x86_64.whl

TensorRT 7.2.2 supports ONNX release 1.6.0. Install it with:
TensorRT 7.2.3.4 supports ONNX release 1.6.0. Install it with:

python3 -m pip install onnx==1.6.0

10 changes: 7 additions & 3 deletions builtin_op_importers.cpp
@@ -1922,23 +1922,27 @@ DEFINE_BUILTIN_OP_IMPORTER(InstanceNormalization)
ASSERT(inputs.at(2).is_weights(), ErrorCode::kUNSUPPORTED_NODE);
nvinfer1::ITensor* tensorPtr = &convertToTensor(inputs.at(0), ctx);
int nbDims = tensorPtr->getDimensions().nbDims;
ASSERT(nbDims >= 3 && nbDims <= 4 && "TensorRT only supports InstanceNormalization on 3D or 4D tensors!",
ASSERT(nbDims >= 3 && nbDims <= 5 && "TensorRT only supports InstanceNormalization on 3D, 4D, or 5D tensors!",
ErrorCode::kUNSUPPORTED_NODE);
auto scale_weights = inputs.at(1).weights();
auto bias_weights = inputs.at(2).weights();
OnnxAttrs attrs(node, ctx);
float epsilon = attrs.get("epsilon", 1e-5f);

const int32_t relu {0}; // the ONNX instance norm op does not use the relu parameter
const float alpha {0.f}; // the ONNX instance norm op does not use the alpha parameter

// Populate instanceNormalization plugin properties.
const std::string pluginName = "InstanceNormalization_TRT";
const std::string pluginVersion = "1";
std::vector<nvinfer1::PluginField> f;
f.emplace_back("epsilon", &epsilon, nvinfer1::PluginFieldType::kFLOAT32, 1);
f.emplace_back("scales", scale_weights.values, nvinfer1::PluginFieldType::kFLOAT32, scale_weights.count());
f.emplace_back("bias", bias_weights.values, nvinfer1::PluginFieldType::kFLOAT32, bias_weights.count());
f.emplace_back("relu", &relu, nvinfer1::PluginFieldType::kINT32, 1);
f.emplace_back("alpha", &alpha, nvinfer1::PluginFieldType::kFLOAT32, 1);

// Create plugin from registry
nvinfer1::IPluginV2* plugin = createPlugin(node.name(), importPluginCreator(pluginName, pluginVersion), f);
const auto plugin = createPlugin(node.name(), importPluginCreator(pluginName, pluginVersion), f);

ASSERT(plugin != nullptr && "InstanceNormalization plugin was not found in the plugin registry!",
ErrorCode::kUNSUPPORTED_NODE);
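
The plugin configured above applies per-channel instance normalization with the `epsilon`, `scales`, and `bias` fields populated from the ONNX node. As a minimal pure-Python sketch of the underlying math (the flat single-channel layout and the function name are illustrative, not the plugin's actual implementation):

```python
import math

def instance_norm(channel, scale, bias, epsilon=1e-5):
    # Normalize one channel's values to zero mean and unit variance,
    # then apply that channel's scale and bias, as the plugin does.
    mean = sum(channel) / len(channel)
    var = sum((x - mean) ** 2 for x in channel) / len(channel)
    inv_std = 1.0 / math.sqrt(var + epsilon)
    return [(x - mean) * inv_std * scale + bias for x in channel]

# After normalization the channel mean equals the bias, since the
# scaled zero-mean values average to zero.
out = instance_norm([1.0, 2.0, 3.0, 4.0], scale=2.0, bias=0.5)
```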
17 changes: 16 additions & 1 deletion docs/Changelog.md
@@ -2,6 +2,21 @@

# ONNX-TensorRT Changelog

## 21.05 Container Release - 2021-05-19
### Added
- Added support for InstanceNormalization on 5D tensors
- Added library only build target [#659](https://github.com/onnx/onnx-tensorrt/pull/659)
- Added support for negative gather indices [#681](https://github.com/onnx/onnx-tensorrt/pull/681)
- Added support for `DOUBLE`-typed inputs and weights through downcast to float [#674](https://github.com/onnx/onnx-tensorrt/pull/674)
- Added support for optional plugin fields in FallbackPlugin path [#676](https://github.com/onnx/onnx-tensorrt/pull/676)

### Updated
- Updated license [#657](https://github.com/onnx/onnx-tensorrt/pull/657)

### Fixes
- Fixed index offset calculation in GatherElements [#675](https://github.com/onnx/onnx-tensorrt/pull/675)
- Clarified dynamic shape support for ReverseSequence
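
The negative-gather-indices entry above follows ONNX `Gather` semantics, where an index `i < 0` on an axis of size `n` addresses position `n + i`. A minimal pure-Python sketch of that normalization (the helper name is illustrative, not the parser's actual code):

```python
def normalize_gather_index(index, axis_size):
    # Map an ONNX gather index to a non-negative position:
    # negative indices count back from the end of the axis.
    if not -axis_size <= index < axis_size:
        raise IndexError(f"index {index} out of range for axis of size {axis_size}")
    return index + axis_size if index < 0 else index

# On an axis of size 5: -1 -> 4, -5 -> 0, 0 -> 0, 4 -> 4.
picked = [normalize_gather_index(i, 5) for i in (-1, -5, 0, 4)]
```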

## 21.03 Container Release - 2021-03-09
### Added
- Added opset13 support for `SoftMax`, `LogSoftmax`, `Squeeze`, and `Unsqueeze`
@@ -13,7 +28,7 @@

## 21.02 Container Release - 2021-01-18
### Added
- Added support for the `ReverseSequence` operator [#590] - https://github.com/onnx/onnx-tensorrt/pull/590
- Added support for the `ReverseSequence` operator [#590](https://github.com/onnx/onnx-tensorrt/pull/590)
- Updated `parse()` and `supportsModel()` API calls with an optional `model_path` parameter to support models with external weights [#621](https://github.com/onnx/onnx-tensorrt/pull/621)
- Added support for the `Celu` operator
- Added support for the `CumSum` operator
2 changes: 1 addition & 1 deletion docs/operators.md
@@ -4,7 +4,7 @@

TensorRT 7.2 supports operators up to Opset 13. The latest information on ONNX operators can be found [here](https://github.com/onnx/onnx/blob/master/docs/Operators.md)

TensorRT supports the following ONNX data types: FLOAT32, FLOAT16, INT8, and BOOL
TensorRT supports the following ONNX data types: DOUBLE, FLOAT32, FLOAT16, INT8, and BOOL

> Note: There is limited support for INT32, INT64, and DOUBLE types. TensorRT will attempt to cast down INT64 to INT32 and DOUBLE down to FLOAT where possible. If not possible, TensorRT will throw an error. See the [TensorRT layer support matrix](https://docs.nvidia.com/deeplearning/sdk/tensorrt-support-matrix/index.html#layers-precision-matrix) for more information on data type support.
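
The "cast down where possible" behavior in the note above hinges on a representability check. A hedged pure-Python sketch of narrowing INT64 weights to INT32 (the function name and the raise-on-overflow policy are illustrative; TensorRT reports its own error when a cast would lose data):

```python
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def downcast_int64(values):
    # Narrow 64-bit integer values to the 32-bit range, rejecting any
    # value that cannot be represented without loss.
    out = []
    for v in values:
        if not INT32_MIN <= v <= INT32_MAX:
            raise OverflowError(f"{v} does not fit in INT32")
        out.append(v)
    return out

ok = downcast_int64([0, -1, INT32_MAX])
```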
2 changes: 1 addition & 1 deletion third_party/onnx
Submodule onnx updated 1813 files