Add int8 support to ConvInteger #26585
Draft
+580
−138
Description
This change extends the ConvInteger implementation to match the ONNX operator spec, which allows both int8 and uint8 for the input tensors.

The ConvInteger schema defines:
T1: tensor(int8) or tensor(uint8)
T2: tensor(int8) or tensor(uint8)
T3: tensor(int32)

Previously, only the uint8 × uint8 combination was supported. This change adds support for all four combinations:
uint8 × uint8 (existing behavior)
uint8 × int8
int8 × uint8
int8 × int8

Motivation and Context
Fixes #24183
Fixes #15888
Fixes #12558
Fixes #3130
Fixes #12362
The ONNX ConvInteger operator schema allows both int8 and uint8 element types for its inputs, but the current implementation only supports uint8 × uint8. This leads to a gap where valid ONNX models using ConvInteger with int8 tensors cannot be executed.
This PR closes that gap by:
Aligning the implementation with the official ConvInteger type constraints.
Enabling models that use int8 (or mixed int8/uint8) for X and W to run without needing operator rewrites or additional custom kernels.
Keeping existing uint8 behavior unchanged, so the change is backwards compatible for current users.
Implementation details
The core logic of ConvInteger::Compute is moved into a templated helper:
XT is the element type of X (uint8_t or int8_t).
WT is the element type of W (uint8_t or int8_t).
Zero-point handling
Zero points are still treated as per-tensor scalar values, with the same validation as before. The values are read via DataRaw() and stored as 8-bit scalars, preserving the previous behavior. Interpretation of these raw bytes as signed or unsigned is delegated to the GEMM implementation via explicit signedness flags (see below).
Im2col templated on XT
The Im2col call now uses the runtime input type XT.
Quantized GEMM with signedness flags
AIsSigned and BIsSigned are derived from the runtime types of W and X.
Data for A and B is passed as raw bytes; the GEMM implementation uses the signedness flags to interpret them correctly, in a manner similar to the MatMulInteger implementation.

The public Compute method becomes a thin dispatcher that selects the appropriate ComputeInner<XT, WT> instantiation based on the actual input types.
In addition, a small set of unit tests is added on top of the existing ConvInteger tests to cover the new type combinations, including cases where the first input tensor contains negative values (for the int8 × int8 path).