* Updating linear resize dimensions check (onnx#362)
* Updates global pooling functions to work correctly with dynamic shapes (onnx#365)
* Support empty initializers for optional inputs (onnx#366)
* Add support for empty initializers for optional inputs
* Alphabetize importPluginFactory
* Support ceiling mode padding for dynamic inputs (onnx#368)
* Register empty Constant node outputs to support empty weights (onnx#369)
* Update myelin library name on Windows (onnx#371)
* Update logic to import ONNX initializers (onnx#375)
* Adding more type checks (onnx#380)
* Add type check for gather and shapedweights attribute imports (onnx#384)
* Throw warning if seed input is provided for randomuniform nodes (onnx#386)
* Update spacetodepth importer to support fulldims and dynamic shapes (onnx#392)
* Add check to avoid console spam of warnings (onnx#402)
* fix some build warnings/errors on Windows VS2019 (onnx#403)
* remove c++11/14 non-compliant constexpr lambdas
* fix build warnings on VS2019
* disable shape input tensor
* Revert "disable shape input tensor"
This reverts commit 9a49e03.
* Support opset11 padding (onnx#408)
* Fix loop importer scan output calculation (onnx#412)
* Fix typo in operators.md supported onnx operators (onnx#399)
operators.md lists RNN twice: one row marked as supported and one marked as unsupported. Since ONNX's RNN operator is supported, the row marked `N` should be removed.
Signed-off-by: juhyung <sonju0427@gmail.com>
* Added optimization only mode which runs optimization passes on the model without converting it to tensorrt. (onnx#420)
* New command line options.
* Updated documentation.
* Currently requires linking against onnx project.
* Support opset8 scan (onnx#433)
* Fix deconv importer and remove instancenormalization epsilon clamp value (onnx#434)
* Fix deconv importer and remove instancenormalization epsilon clamp value
* Remove dilations
* Add check for shape tensor outputs (onnx#437)
* Fix slice calculation for -INT_MAX (onnx#438)
* Support boolean weight conversion to tensors (onnx#439)
* Fix node output accesser for older versions of protobuf (onnx#441)
* Add const qualifier to isNullTensor() (onnx#446)
* Support negative slicing across an entire axis (onnx#453)
* Keep track of Loop tensor mappings (onnx#454)
* Fix fp16 weight import (onnx#484)
* Fix GEMM import assertion (onnx#485)
Co-authored-by: pranavm-nvidia <49246958+pranavm-nvidia@users.noreply.github.com>
Co-authored-by: George Wu <jywu@microsoft.com>
Co-authored-by: JuHyung Son <sonju0427@gmail.com>
Co-authored-by: Dennis Sandler <sandler.denis@gmail.com>
ImporterContext.hpp (10 additions, 0 deletions):

@@ -48,6 +48,8 @@ class ImporterContext final : public IImporterContext
         mTensorNameCounts;  // Keep track of how many times a tensor name shows up, to avoid duplicate naming in TRT.
     StringMap<size_t>
         mLayerNameCounts;   // Keep track of how many times a layer name shows up, to avoid duplicate naming in TRT.
+    std::unordered_set<std::string> mUnsupportedShapeTensors; // Container to hold any shape tensors that are the output of layers that do not support shape tensors.
+    StringMap<std::string> mLoopTensors; // Container to map subgraph tensors to their original outer graph names.
README.md (9 additions, 0 deletions):

@@ -58,6 +58,15 @@ ONNX models can also be converted to human-readable text:

     onnx2trt my_model.onnx -t my_model.onnx.txt

+ONNX models can also be optimized by ONNX's optimization libraries.
+
+To optimize an ONNX model and output a new one, use `-m` to specify the output model name and `-O` to specify a semicolon-separated list of optimization passes to apply:
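Combining the two flags, an invocation might look like the following sketch. The pass names are examples drawn from the ONNX optimizer's pass registry, not a recommendation; which passes are available depends on the onnx build the tool is linked against:

```shell
# Run two ONNX optimizer passes and write the optimized model to a new file.
# Pass names are illustrative; check your onnx build for the supported set.
onnx2trt my_model.onnx -O "eliminate_identity;eliminate_nop_transpose" -m my_model_optimized.onnx
```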