Merge shapes only in identity op and node-level shape inference #2623
Conversation
Codecov Report
✅ All modified and coverable lines are covered by tests.

Additional details and impacted files

@@            Coverage Diff             @@
##             main    #2623      +/-   ##
==========================================
- Coverage   70.39%   70.37%   -0.02%
==========================================
  Files         222      222
  Lines       26275    26288      +13
  Branches     2629     2629
==========================================
+ Hits        18496    18501       +5
- Misses       6859     6866       +7
- Partials      920      921       +1

☔ View full report in Codecov by Sentry.
Looking at the PR description, it doesn't sound right: we sometimes get more complete shape information from PyTorch. When there are symbolic shapes, ONNX will yield None whereas PyTorch will yield a named dim, so it would not be correct to always override?
As for this PR, it only renames the function and removes the redundant backward shape inference in cast. But
I see. Is it because cast only applies when the input is known and can be constant folded?
I think that's ok. When a dim is specialized, it should be because the shape inference engine is certain about it.
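For illustration, a minimal sketch of dim-wise merging in that spirit (the `Dim` alias and the `merge_dim`/`merge_shapes` helpers below are made up for this example, not this repo's actual `_merge_shapes`): a concrete or named dim is kept rather than overridden by an unknown one.

```python
from typing import List, Optional, Union

# A dim is either an int (concrete), a str (named symbolic dim), or None (unknown).
Dim = Union[int, str, None]


def merge_dim(existing: Dim, inferred: Dim) -> Dim:
    """Prefer the more specific dim instead of blindly overriding."""
    if inferred is None:
        return existing   # keep a concrete/named dim over "unknown"
    if existing is None:
        return inferred
    return existing       # both known: keep what we already have


def merge_shapes(
    existing: Optional[List[Dim]], inferred: Optional[List[Dim]]
) -> Optional[List[Dim]]:
    """Merge two shapes of equal rank dim by dim; None means no shape is known."""
    if existing is None:
        return inferred
    if inferred is None:
        return existing
    assert len(existing) == len(inferred), "rank mismatch"
    return [merge_dim(e, i) for e, i in zip(existing, inferred)]


# PyTorch may name a symbolic dim ("batch") where ONNX inference yields None.
print(merge_shapes(["batch", 3, None], [None, 3, 224]))  # ['batch', 3, 224]
```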
LGTM. @gramalingam for a second review
Node-level shape inference covers the forward shape inference, and by relying on the constant-folding logic, we only need _merge_shapes in the identity op to get backward shape inference.
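A hedged sketch of how merging at an Identity node yields that backward propagation (the `Value` class and `fold_identity` below are hypothetical stand-ins for this example, not the actual IR API):

```python
from dataclasses import dataclass
from typing import List, Optional, Union

Dim = Union[int, str, None]


@dataclass
class Value:
    name: str
    shape: Optional[List[Dim]] = None


def fold_identity(inp: Value, out: Value) -> None:
    """Merging shapes at an Identity node propagates info in both directions."""
    if inp.shape is None or out.shape is None:
        merged = inp.shape if out.shape is None else out.shape
    else:
        # Dim-wise merge: keep the known dim when the other side is None.
        merged = [i if o is None else o if i is None else i
                  for i, o in zip(inp.shape, out.shape)]
    inp.shape = merged   # backward: downstream knowledge constrains the input
    out.shape = merged   # forward: upstream knowledge constrains the output


# Example: a shape known only for the Identity output flows back to its input.
x = Value("x", shape=None)            # nothing known yet for the input
y = Value("y", shape=["batch", 224])  # known from a downstream consumer
fold_identity(x, y)
print(x.shape)  # ['batch', 224]
```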