
Conversation

titaiwangms
Contributor

Node-level shape inference covers forward shape inference, and, relying on the constant-folding logic, we only need _merge_shapes in the Identity op to get backward shape inference.
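
For illustration, here is a minimal sketch of what a _merge_shapes-style helper could do, assuming it keeps the more specific of two dims (static int over symbolic name over unknown). The body and the Dim alias are invented for this sketch, not the actual onnxscript implementation:

```python
from typing import Optional, Sequence, Union

# A dim is a static size (int), a symbolic name (str), or unknown (None).
Dim = Union[int, str, None]


def merge_dim(a: Dim, b: Dim) -> Dim:
    """Pick the more specific of two dims: static int > symbolic name > None."""
    if isinstance(a, int):
        return a
    if isinstance(b, int):
        return b
    return a if a is not None else b


def merge_shapes(
    inferred: Optional[Sequence[Dim]], existing: Optional[Sequence[Dim]]
) -> Optional[list]:
    """Merge two shapes dim by dim, keeping the more specific information."""
    if inferred is None:
        return list(existing) if existing is not None else None
    if existing is None:
        return list(inferred)
    assert len(inferred) == len(existing), "rank mismatch"
    return [merge_dim(a, b) for a, b in zip(inferred, existing)]
```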

codecov bot commented Oct 10, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 70.37%. Comparing base (32a61f4) to head (4324ff1).
⚠️ Report is 4 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2623      +/-   ##
==========================================
- Coverage   70.39%   70.37%   -0.02%     
==========================================
  Files         222      222              
  Lines       26275    26288      +13     
  Branches     2629     2629              
==========================================
+ Hits        18496    18501       +5     
- Misses       6859     6866       +7     
- Partials      920      921       +1     


@justinchuby
Collaborator

justinchuby commented Oct 10, 2025

Looking at the PR description, it doesn't sound right: we sometimes get more complete shape information from PyTorch. When there are symbolic shapes, ONNX will yield None whereas PyTorch will yield a named shape?

So it would not be correct to always override?

@titaiwangms
Contributor Author

> Looking at the PR description, it doesn't sound right: we sometimes get more complete shape information from PyTorch. When there are symbolic shapes, ONNX will yield None whereas PyTorch will yield a named shape?
>
> So it would not be correct to always override?

As for this PR, it only renames the function and removes the redundant backward shape inference in Cast. But _merge_shapes does favor static shapes. @gramalingam, is there a risk that node-level shape inference overrides torch dynamic dims?
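
To make the concern concrete with the hypothetical merge_dim sketch above (values invented for illustration): a static-favoring merge would replace a torch symbolic dim with a specialized constant.

```python
torch_shape = ["batch", 3]   # symbolic dim exported from PyTorch
inferred_shape = [2, 3]      # dim specialized by node-level shape inference

# The static-favoring merge picks 2 over "batch", dropping the symbolic name.
merged = [merge_dim(a, b) for a, b in zip(inferred_shape, torch_shape)]
print(merged)  # [2, 3]
```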

@justinchuby
Collaborator

I see. Is it because Cast only applies when the input is known and can be constant-folded?

> Is there a risk in node-level shape inference we override torch dynamic dims

I think that's OK. When a dim is specialized, it should be because the shape inference engine is certain about it.
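
As a standalone check (not code from this PR), ONNX's stock shape inference already carries the full input shape, including symbolic dims, forward through Cast, which is consistent with a backward pass being redundant there:

```python
from onnx import TensorProto, helper, shape_inference

# Cast changes only the dtype, so its output shape should equal its input shape.
graph = helper.make_graph(
    [helper.make_node("Cast", ["x"], ["y"], to=TensorProto.INT64)],
    "cast_demo",
    [helper.make_tensor_value_info("x", TensorProto.FLOAT, ["batch", 3])],
    [helper.make_tensor_value_info("y", TensorProto.INT64, None)],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 18)])
inferred = shape_inference.infer_shapes(model)
# The symbolic "batch" dim and the static 3 both survive forward inference.
print(inferred.graph.output[0].type.tensor_type.shape)
```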

justinchuby left a comment (Collaborator)


LGTM. @gramalingam for a second review

titaiwangms enabled auto-merge (squash) October 14, 2025 16:37
titaiwangms merged commit 811937c into microsoft:main October 14, 2025
32 checks passed

Labels

enhancement (New feature or request)

3 participants