[export] use tree_map for _flatten_dynamic_shapes #125415
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/125415
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit 5c9ddd1 with merge base b1b0399.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D56894923
torch/_export/non_strict_utils.py (Outdated)
```python
flat_dynamic_shapes = _flatten_dynamic_shapes(combined_args, dynamic_shapes)

# check number of shapes vs. number of inputs
num_placeholders = len([node for node in gm.graph.nodes if node.op == "placeholder"])
```
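The comment in this diff hints at a shapes-vs-placeholders consistency check. As a rough, hypothetical sketch (the diff is truncated here, so the exact check and error type are assumptions, not the PR's code), it might continue along these lines:

```python
# Hypothetical continuation of the truncated diff above; not the actual PR code.
if len(flat_dynamic_shapes) != num_placeholders:
    raise ValueError(
        f"expected {num_placeholders} dynamic shapes specs after flattening, "
        f"got {len(flat_dynamic_shapes)}"
    )
```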
Suggested change:

```diff
-num_placeholders = len([node for node in gm.graph.nodes if node.op == "placeholder"])
+num_placeholders = [node.op == "placeholder" for node in gm.graph.nodes].count(True)
```
Thanks!
This pull request was exported from Phabricator. Differential Revision: D56894923
Summary: Pull Request resolved: #125415

Fixing the implementation of `_flatten_dynamic_shapes()` to follow how `_process_dynamic_shapes()` does it. The previous implementation would misinterpret some nested dynamic shapes specs, causing it to miss some shapes specs, for example with nested inputs/constant input tuples:

```python
inputs = (
    (2, 1),
    (
        torch.randn(2, 1),
        torch.randn(2, 2),
        torch.randn(2, 3),
    ),
)
dynamic_shapes = (
    (None, None),
    (
        None,
        None,
        None,
    ),
)
```

This would get interpreted as only 2 shapes specs (for the 2d and 3d tensors). Fixing so this doesn't happen.

Test Plan: Existing export tests

Differential Revision: D56894923
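For concreteness, here is a minimal, self-contained sketch built around the example above. The module `M` and the `torch.export.export` call are illustrative assumptions, not the test added in this PR; only the `inputs`/`dynamic_shapes` structure comes from the summary.

```python
import torch

class M(torch.nn.Module):
    # Hypothetical module shaped to accept the nested inputs from the summary.
    def forward(self, ints, tensors):
        x, y, z = tensors
        return x * ints[0], y * ints[1], z + 1

inputs = (
    (2, 1),
    (
        torch.randn(2, 1),
        torch.randn(2, 2),
        torch.randn(2, 3),
    ),
)
dynamic_shapes = (
    (None, None),
    (None, None, None),
)

# Before the fix, the nested spec could be flattened into too few entries
# (only the 2d/3d tensor specs); with the fix, each flattened input lines up
# with its own shapes spec and export succeeds.
ep = torch.export.export(M(), inputs, dynamic_shapes=dynamic_shapes)
print(ep.graph_module.graph)
```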
Can we add a test to make sure this change fixes the original issue? Or does adding the assertion help us check it with the existing tests?
This pull request was exported from Phabricator. Differential Revision: D56894923
Just added, thank you!
@pytorchbot merge -f 'Landed internally' (Initiating merge automatically since Phabricator Diff has merged, using force because this PR might not pass merge_rules.json but landed internally)
Merge started: Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.