Add Op for torch.aten.unfold
#3772
Conversation
Receiving an error related to `torch.tensor_static_info_cast` that causes the test to fail:

```
torch_mlir.compiler_utils.TorchMlirCompilerError: Lowering TorchScript IR -> Torch Backend IR failed with the following diagnostics:
error: unsupported by backend contract: tensor with unknown rank
note: see current operation: %2 = "torch.tensor_static_info_cast"(%arg0) : (!torch.vtensor<[?,?],f32>) -> !torch.vtensor
note: this is likely due to a missing transfer function in abstract_interp_lib_gen.py
```
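For context, the "transfer functions" the note refers to are plain Python shape functions over `List[int]` in abstract_interp_lib_gen.py. A rough sketch of the unfold shape logic (the function name and the rank-zero convention here are illustrative assumptions, not the exact registered function):

```python
from typing import List

def unfold_shape(self: List[int], dimension: int, size: int, step: int) -> List[int]:
    # Assumed convention: a rank-zero tensor unfolds to a 1-D tensor of
    # length `size` (PyTorch treats it as having a single implicit element).
    if len(self) == 0:
        return [size]
    # Normalize negative dimensions.
    dim = dimension if dimension >= 0 else dimension + len(self)
    out = list(self)
    # The unfolded dimension holds the number of extracted blocks ...
    out[dim] = (self[dim] - size) // step + 1
    # ... and a new trailing dimension holds the window size.
    out.append(size)
    return out
```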
- Move unfold from Linear.cpp -> DataMovement.cpp
- Add `AtenUnfoldOp` to isViewLikeOp in Utils.cpp
- Extend test cases for shape function
- Move test from slice_like.py to reshape_like.py
- Add test case for larger-ranked tensor and negative indexing (PASSING)
- Add test case for dynamically sized tensor (FAILING)
Add support for rank-zero tensor
Looks good to me! Fix the CI?
CI failure, included for your convenience:

```
****** Failed tests - 3 tests
FAIL - "Unfold_Module_Dynamic_basic"
Compilation error: Traceback (most recent call last):
  File "/_work/torch-mlir/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir/torch_mlir_e2e_test/framework.py", line 332, in compile_and_run_test
    compiled = config.compile(test.program_factory(), verbose=verbose)
  File "/_work/torch-mlir/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir/torch_mlir_e2e_test/configs/onnx_backend.py", line 153, in compile
    onnx_module = convert_onnx(program, example_args)
  File "/_work/torch-mlir/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir/torch_mlir_e2e_test/configs/onnx_backend.py", line 81, in convert_onnx
    torch.onnx.export(
  File "/opt/python/cp311-cp311/lib/python3.11/site-packages/torch/onnx/utils.py", line 551, in export
    _export(
  File "/opt/python/cp311-cp311/lib/python3.11/site-packages/torch/onnx/utils.py", line 1648, in _export
    graph, params_dict, torch_out = _model_to_graph(
  File "/opt/python/cp311-cp311/lib/python3.11/site-packages/torch/onnx/utils.py", line 1170, in _model_to_graph
    graph, params, torch_out, module = _create_jit_graph(model, args)
  File "/opt/python/cp311-cp311/lib/python3.11/site-packages/torch/onnx/utils.py", line 1046, in _create_jit_graph
    graph, torch_out = _trace_and_get_graph_from_model(model, args)
  File "/opt/python/cp311-cp311/lib/python3.11/site-packages/torch/onnx/utils.py", line 950, in _trace_and_get_graph_from_model
    trace_graph, torch_out, inputs_states = torch.jit._get_trace_graph(
  File "/opt/python/cp311-cp311/lib/python3.11/site-packages/torch/jit/_trace.py", line 1497, in _get_trace_graph
    outs = ONNXTracedModule(
  File "/opt/python/cp311-cp311/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/python/cp311-cp311/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/python/cp311-cp311/lib/python3.11/site-packages/torch/jit/_trace.py", line 141, in forward
    graph, out = torch._C._create_graph_by_tracing(
  File "/opt/python/cp311-cp311/lib/python3.11/site-packages/torch/jit/_trace.py", line 132, in wrapper
    outs.append(self.inner(*trace_inputs))
  File "/opt/python/cp311-cp311/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/python/cp311-cp311/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/python/cp311-cp311/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1543, in _slow_forward
    result = self.forward(*input, **kwargs)
  File "/_work/torch-mlir/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir/torch_mlir_e2e_test/test_suite/reshape_like.py", line 1725, in forward
    return x.unfold(1, 2, 1)
RuntimeError: maximum size for tensor at dimension 1 is 1 but size is 2
```
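The `RuntimeError` at the bottom of the traceback is eager-mode argument validation: the window must fit inside the target dimension, and the traced example input evidently had extent 1 at dimension 1. A plain-Python sketch of that check (it mirrors the error text above, not the actual ATen source):

```python
def check_unfold_args(shape, dimension, size, step):
    # Only the size check is sketched here; `step` is part of the op
    # signature but is validated separately in ATen.
    max_size = shape[dimension] if len(shape) > 0 else 1
    if size > max_size:
        raise RuntimeError(
            f"maximum size for tensor at dimension {dimension} "
            f"is {max_size} but size is {size}"
        )

# A window of size 2 cannot fit in a dimension of extent 1:
try:
    check_unfold_args([3, 1], dimension=1, size=2, step=1)
except RuntimeError as e:
    print(e)
```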
(from a quick glance this is probably more an issue with onnx than us. Probably just xfail the onnx related failures?)
This approach looks good. Let's try to figure out what is going on with the onnx e2e test path.
projects/pt1/python/torch_mlir/jit_ir_importer/build_tools/abstract_interp_lib_gen.py
Yeah, I'm looking through them and I think it's onnx related. For example, one of the tests fails when using a negative dimension in the GH action, but if I change it locally to target the last dimension with a positive int and re-run with
- Clean up `DataMovement.cpp` and handle the rank = 0 with size = 0 case
- Add test for rank = 0 with size = 0
- Add shape tests for the rank-zero case
Verified it's an issue with onnx and added related failures to
Description
Implementation of the op for `torch.aten.unfold`: TorchToLinalg Op Support #347

Documentation of the op can be found here: PyTorch Docs

For this op, we apply a sliding window of some `size` along a single `dimension`, with `step` in between iterations.

Declaration: `aten::unfold(Tensor(a) self, int dimension, int size, int step) -> Tensor(a)`

The resulting unfolded tensor modifies the shape of `dimension` to be equal to the number of blocks that the sliding window extracts/inserts, with an additional dimension of `size` appended (the number of columns of the output tensor directly translates from the size of the sliding window). So if we had a tensor of rank 3 (A x B x C), with dimension = 1, size = 2, and step = 2:
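Concretely, the A x B x C case can be checked in eager PyTorch (the numeric shapes below are chosen arbitrarily for illustration):

```python
import torch

A, B, C = 2, 6, 3
x = torch.rand(A, B, C)
y = x.unfold(1, 2, 2)  # dimension=1, size=2, step=2
# Dimension 1 shrinks to the number of blocks, (B - size) // step + 1 = 3,
# and a trailing dimension of length `size` is appended.
print(y.shape)  # torch.Size([2, 3, 3, 2])
```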
After extracting the window from the input tensor, we insert the (1 x size) slice into the output tensor. We can make this simpler by mapping the output indices from the input indices, like they do in the official implementation:
PyTorch Code
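That index mapping can be stated directly: with dimension = d, output element (..., i, ..., j) (where i sits at position d and j is the new trailing index) reads input element (..., i * step + j, ...). A small eager-mode check of that relation (this mirrors the mapping described above, not the actual lowering code):

```python
import torch

x = torch.rand(2, 7, 3)
d, size, step = 1, 3, 2
y = x.unfold(d, size, step)  # shape (2, 3, 3, 3): 3 blocks, window size 3

# output[a, i, c, j] should equal input[a, i*step + j, c]
for i in range(y.shape[d]):
    for j in range(size):
        assert torch.equal(y[:, i, :, j], x[:, i * step + j, :])
print("index mapping verified")
```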