5 changes: 0 additions & 5 deletions .github/workflows/ci.yml
@@ -75,8 +75,3 @@ jobs:
- name: Integration tests (torch-${{ matrix.torch-version }})
run: |
bash build_tools/ci/test_posix.sh ${{ matrix.torch-version }}

- name: Check generated sources (torch-nightly only)
if: ${{ matrix.torch-version == 'nightly' }}
run: |
bash build_tools/ci/check_generated_sources.sh
2 changes: 1 addition & 1 deletion CMakeLists.txt
@@ -74,7 +74,7 @@ endif()
# Turning this off disables the old TorchScript path, leaving FX based import as the current supported option.
# The option will be retained for a time, and if a maintainer is interested in setting up testing for it,
# please reach out on the list and speak up for it. It will only be enabled in CI for test usage.
cmake_dependent_option(TORCH_MLIR_ENABLE_JIT_IR_IMPORTER "Enables JIT IR Importer" ON TORCH_MLIR_ENABLE_PYTORCH_EXTENSIONS OFF)
cmake_dependent_option(TORCH_MLIR_ENABLE_JIT_IR_IMPORTER "Enables JIT IR Importer" OFF TORCH_MLIR_ENABLE_PYTORCH_EXTENSIONS OFF)
cmake_dependent_option(TORCH_MLIR_ENABLE_LTC "Enables LTC backend" OFF TORCH_MLIR_ENABLE_PYTORCH_EXTENSIONS OFF)

option(TORCH_MLIR_ENABLE_ONNX_C_IMPORTER "Enables the ONNX C importer" OFF)
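This hunk flips the `TORCH_MLIR_ENABLE_JIT_IR_IMPORTER` default from `ON` to `OFF`. With `cmake_dependent_option(<opt> <doc> <default> <condition> <forced>)`, `<default>` only applies when `<condition>` holds; otherwise the option is forced to `<forced>`. A rough Python model of that decision logic (not CMake API, just an illustration):

```python
def cmake_dependent_option(default: bool, condition: bool, forced: bool) -> bool:
    """Resolve a cmake_dependent_option when the user has not set it explicitly."""
    return default if condition else forced

# Before this change: defaults ON whenever TORCH_MLIR_ENABLE_PYTORCH_EXTENSIONS is ON.
old = cmake_dependent_option(default=True, condition=True, forced=False)
# After: defaults OFF even with extensions enabled; users must opt in explicitly.
new = cmake_dependent_option(default=False, condition=True, forced=False)
```

Either way, when `TORCH_MLIR_ENABLE_PYTORCH_EXTENSIONS` is off, the importer is forced off.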
2 changes: 1 addition & 1 deletion build_tools/ci/build_posix.sh
@@ -51,7 +51,7 @@ cmake -S "$repo_root/externals/llvm-project/llvm" -B "$build_dir" \
-DLLVM_TARGETS_TO_BUILD=host \
-DMLIR_ENABLE_BINDINGS_PYTHON=ON \
-DTORCH_MLIR_ENABLE_LTC=OFF \
-DTORCH_MLIR_ENABLE_PYTORCH_EXTENSIONS=ON
-DTORCH_MLIR_ENABLE_PYTORCH_EXTENSIONS=OFF
echo "::endgroup::"

echo "::group::Build"
2 changes: 1 addition & 1 deletion build_tools/ci/test_posix.sh
@@ -6,7 +6,7 @@ this_dir="$(cd $(dirname $0) && pwd)"
repo_root="$(cd $this_dir/../.. && pwd)"
torch_version="${1:-unknown}"

export PYTHONPATH="$repo_root/build/tools/torch-mlir/python_packages/torch_mlir:$repo_root/projects/pt1"
export PYTHONPATH="$repo_root/build/tools/torch-mlir/python_packages/torch_mlir:$repo_root/projects/e2e"

echo "::group::Run ONNX e2e integration tests"
python3 -m e2e_testing.main --config=onnx -v
2 changes: 1 addition & 1 deletion docs/adding_an_e2e_test.md
@@ -5,7 +5,7 @@
Adding support for a Torch operator in Torch-MLIR should always be accompanied
by at least one end-to-end test to make sure the implementation of the op
matches the behavior of PyTorch. The tests live in the
`torch-mlir/projects/pt1/python/torch_mlir_e2e_test/test_suite` directory. When adding a new
`torch-mlir/projects/e2e/torch_mlir_e2e_test/test_suite` directory. When adding a new
test, choose a file that best matches the op you're testing, and if there is no
file that best matches add a new file for your op.

16 changes: 8 additions & 8 deletions docs/development.md
@@ -463,10 +463,10 @@ Torch-MLIR has two types of tests:
1. End-to-end execution tests. These compile and run a program and check the
result against the expected output from execution on native Torch. These use
a homegrown testing framework (see
`projects/pt1/python/torch_mlir_e2e_test/framework.py`) and the test suite
lives at `projects/pt1/python/torch_mlir_e2e_test/test_suite/__init__.py`.
The tests require to build with `TORCH_MLIR_ENABLE_PYTORCH_EXTENSIONS` (and
the dependent option `TORCH_MLIR_ENABLE_JIT_IR_IMPORTER`) set to `ON`.
`projects/e2e/torch_mlir_e2e_test/framework.py`) and the test suite
lives at `projects/e2e/torch_mlir_e2e_test/test_suite/__init__.py`.
Some old configs require building with `TORCH_MLIR_ENABLE_PYTORCH_EXTENSIONS`
(and the dependent option `TORCH_MLIR_ENABLE_JIT_IR_IMPORTER`) set to `ON`.

2. Compiler and Python API unit tests. These use LLVM's `lit` testing framework.
For example, these might involve using `torch-mlir-opt` to run a pass and
@@ -482,7 +482,7 @@ Torch-MLIR has two types of tests:
> An `.env` file must be generated via `build_tools/write_env_file.sh` before these commands can be run.


The following assumes you are in the `projects/pt1` directory:
The following assumes you are in the `projects/e2e` directory:

```shell
# Run all tests on the reference backend
Expand All @@ -496,7 +496,7 @@ The following assumes you are in the `projects/pt1` directory:
Alternatively, you can run the tests via Python directly:

```shell
cd projects/pt1
cd projects/e2e
python -m e2e_testing.main -f 'AtenEmbeddingBag'
```

@@ -621,10 +621,10 @@ Here are some examples of PRs updating the LLVM and MLIR-HLO submodules:

To enable ASAN, pass `-DLLVM_USE_SANITIZER=Address` to CMake. This should "just
work" with all C++ tools like `torch-mlir-opt`. When running a Python script
such as through `./projects/pt1/tools/e2e_test.sh`, you will need to do:
such as through `./projects/e2e/tools/e2e_test.sh`, you will need to do:

```
LD_PRELOAD="$(clang -print-file-name=libclang_rt.asan-x86_64.so)" ./projects/pt1/tools/e2e_test.sh -s
LD_PRELOAD="$(clang -print-file-name=libclang_rt.asan-x86_64.so)" ./projects/e2e/tools/e2e_test.sh -s
# See instructions here for how to get the libasan path for GCC:
# https://stackoverflow.com/questions/48833176/get-location-of-libasan-from-gcc-clang
```
5 changes: 5 additions & 0 deletions projects/CMakeLists.txt
@@ -64,6 +64,11 @@ if(TORCH_MLIR_ENABLE_JIT_IR_IMPORTER OR TORCH_MLIR_ENABLE_LTC)
message(STATUS "TORCH_LIBRARIES = ${TORCH_LIBRARIES}")
endif()

# Include e2e testing infra.
if(NOT TORCH_MLIR_ENABLE_ONLY_MLIR_PYTHON_BINDINGS)
add_subdirectory(e2e)
endif()

# Include jit_ir_common if the jit_ir importer or LTC is enabled,
# since they both require it.
if(TORCH_MLIR_ENABLE_JIT_IR_IMPORTER OR TORCH_MLIR_ENABLE_LTC)
7 changes: 7 additions & 0 deletions projects/e2e/CMakeLists.txt
@@ -0,0 +1,7 @@
message(STATUS "Building end-to-end testing package.")

################################################################################
# Setup python.
################################################################################

add_subdirectory(torch_mlir_e2e_test)
206 changes: 123 additions & 83 deletions projects/pt1/e2e_testing/main.py → projects/e2e/e2e_testing/main.py
@@ -11,20 +11,15 @@

torch.device("cpu")

from torch_mlir_e2e_test.framework import run_tests
from torch_mlir_e2e_test.framework import run_tests, TestConfig
from torch_mlir_e2e_test.reporting import report_results
from torch_mlir_e2e_test.registry import GLOBAL_TEST_REGISTRY


# Available test configs.
from torch_mlir_e2e_test.configs import (
LazyTensorCoreTestConfig,
FxImporterTestConfig,
NativeTorchTestConfig,
OnnxBackendTestConfig,
TorchScriptTestConfig,
TorchDynamoTestConfig,
JITImporterTestConfig,
FxImporterTestConfig,
)

from torch_mlir_e2e_test.linalg_on_tensors_backends.refbackend import (
@@ -65,42 +60,49 @@

register_all_tests()

DEPRECATED_CONFIGS = [
"torchscript",
"linalg",
"stablehlo",
"tosa",
"lazy_tensor_core",
"torchdynamo",
]

CONFIGS = [
"native_torch",
"onnx",
"onnx_tosa",
"fx_importer",
"fx_importer_stablehlo",
"fx_importer_tosa",
]


def _get_argparse():
config_choices = [
"native_torch",
"torchscript",
"linalg",
"stablehlo",
"tosa",
"lazy_tensor_core",
"torchdynamo",
"onnx",
"onnx_tosa",
"fx_importer",
"fx_importer_stablehlo",
"fx_importer_tosa",
]
config_choices = CONFIGS + DEPRECATED_CONFIGS
parser = argparse.ArgumentParser(description="Run torchscript e2e tests.")
parser.add_argument(
"-c",
"--config",
choices=config_choices,
default="linalg",
default="fx_importer",
help=f"""
Meaning of options:
"onnx": export to the model via onnx and reimport using the torch-onnx-to-torch path.
"fx_importer": run the model through the fx importer frontend and execute the graph using Linalg-on-Tensors.
"fx_importer_stablehlo": run the model through the fx importer frontend and execute the graph using Stablehlo backend.
"fx_importer_tosa": run the model through the fx importer frontend and execute the graph using the TOSA backend.
"onnx_tosa": Import ONNX to Torch via the torch-onnx-to-torch path and execute the graph using the TOSA backend.

The following options are deprecated:
"linalg": run through torch-mlir"s default Linalg-on-Tensors backend.
"tosa": run through torch-mlir"s default TOSA backend.
"stablehlo": run through torch-mlir"s default Stablehlo backend.
"native_torch": run the torch.nn.Module as-is without compiling (useful for verifying model is deterministic; ALL tests should pass in this configuration).
"torchscript": compile the model to a torch.jit.ScriptModule, and then run that as-is (useful for verifying TorchScript is modeling the program correctly).
"lazy_tensor_core": run the model through the Lazy Tensor Core frontend and execute the traced graph.
"torchdynamo": run the model through the TorchDynamo frontend and execute the graph using Linalg-on-Tensors.
"onnx": export to the model via onnx and reimport using the torch-onnx-to-torch path.
"fx_importer": run the model through the fx importer frontend and execute the graph using Linalg-on-Tensors.
"fx_importer_stablehlo": run the model through the fx importer frontend and execute the graph using Stablehlo backend.
"fx_importer_tosa": run the model through the fx importer frontend and execute the graph using the TOSA backend.
"onnx_tosa": Import ONNX to Torch via the torch-onnx-to-torch path and execute the graph using the TOSA backend.
""",
)
parser.add_argument(
@@ -143,63 +145,106 @@ def _get_argparse():
return parser
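The supported-versus-deprecated split above keeps old config names accepted while steering users to a supported default. A standalone sketch of that argparse pattern (config lists trimmed; not the real test-runner code):

```python
import argparse

CONFIGS = ["native_torch", "onnx", "fx_importer"]      # actively supported
DEPRECATED_CONFIGS = ["torchscript", "torchdynamo"]    # still accepted, but warned about

parser = argparse.ArgumentParser(description="e2e test runner (sketch).")
parser.add_argument(
    "-c",
    "--config",
    choices=CONFIGS + DEPRECATED_CONFIGS,  # old names remain valid choices
    default="fx_importer",                 # default moved off the deprecated path
)

args = parser.parse_args(["-c", "torchscript"])
if args.config in DEPRECATED_CONFIGS:
    print(f"Warning: the selected config, '{args.config}', is not actively supported.")
```

Keeping the deprecated names in `choices` means existing CI invocations keep parsing; only the warning (and eventually removal) signals the migration.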


def _setup_config(
config: str, all_test_unique_names: set[str]
) -> tuple[TestConfig, set[str], set[str]]:
if config in DEPRECATED_CONFIGS:
return _setup_deprecated_config(config, all_test_unique_names)
if config == "native_torch":
return (
NativeTorchTestConfig(),
set(),
set(),
)
if config == "fx_importer":
return (
FxImporterTestConfig(RefBackendLinalgOnTensorsBackend()),
FX_IMPORTER_XFAIL_SET,
FX_IMPORTER_CRASHING_SET,
)
if config == "fx_importer_stablehlo":
return (
FxImporterTestConfig(LinalgOnTensorsStablehloBackend(), "stablehlo"),
FX_IMPORTER_STABLEHLO_XFAIL_SET,
FX_IMPORTER_STABLEHLO_CRASHING_SET,
)
if config == "fx_importer_tosa":
return (
FxImporterTestConfig(LinalgOnTensorsTosaBackend(), "tosa"),
FX_IMPORTER_TOSA_XFAIL_SET,
FX_IMPORTER_TOSA_CRASHING_SET,
)
if config == "onnx":
return (
OnnxBackendTestConfig(RefBackendLinalgOnTensorsBackend()),
ONNX_XFAIL_SET,
ONNX_CRASHING_SET,
)
if config == "onnx_tosa":
return (
OnnxBackendTestConfig(LinalgOnTensorsTosaBackend(), output_type="tosa"),
ONNX_TOSA_XFAIL_SET,
ONNX_TOSA_CRASHING_SET,
)
raise ValueError(f'Got invalid config, "{config}". Choices: {CONFIGS}')
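The chain of `if config == ...` branches above is one way to write this dispatch; the same contract (name in, `(TestConfig, xfail_set, crashing_set)` out) can also be sketched as a lookup table. All objects below are illustrative placeholders, not the real torch-mlir classes:

```python
# Placeholder stand-ins for the real config classes and xfail sets.
FX_IMPORTER_XFAIL_SET = {"SomeKnownFailure"}
FX_IMPORTER_CRASHING_SET = {"SomeKnownCrash"}

_SETUP = {
    "native_torch": (lambda: "NativeTorchTestConfig", set(), set()),
    "fx_importer": (
        lambda: "FxImporterTestConfig",
        FX_IMPORTER_XFAIL_SET,
        FX_IMPORTER_CRASHING_SET,
    ),
}

def setup_config(name):
    """Return (config, xfail_set, crashing_set) for a config name."""
    try:
        factory, xfail, crashing = _SETUP[name]
    except KeyError:
        raise ValueError(f'Got invalid config, "{name}". Choices: {sorted(_SETUP)}') from None
    return factory(), xfail, crashing

config, xfail_set, crashing_set = setup_config("fx_importer")
```

A table keeps the name list and the dispatch in one place; the if-chain used in the PR stays closer to the structure of the code it replaces.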


def _setup_deprecated_config(
config: str, all_test_unique_names: set[str]
) -> tuple[TestConfig, set[str], set[str]]:
print(f"Warning: the selected config, '{config}', is not actively supported.")
import torch_mlir_e2e_test.pt1_configs as _configs

if config == "linalg":
return (
_configs.JITImporterTestConfig(RefBackendLinalgOnTensorsBackend()),
LINALG_XFAIL_SET,
LINALG_CRASHING_SET,
)
if config == "stablehlo":
return (
_configs.JITImporterTestConfig(
LinalgOnTensorsStablehloBackend(), "stablehlo"
),
all_test_unique_names - STABLEHLO_PASS_SET,
STABLEHLO_CRASHING_SET,
)
if config == "tosa":
return (
_configs.JITImporterTestConfig(LinalgOnTensorsTosaBackend(), "tosa"),
all_test_unique_names - TOSA_PASS_SET,
TOSA_CRASHING_SET,
)
if config == "torchscript":
return (
_configs.TorchScriptTestConfig(),
set(),
set(),
)
if config == "lazy_tensor_core":
return (
_configs.LazyTensorCoreTestConfig(),
LTC_XFAIL_SET,
LTC_CRASHING_SET,
)
if config == "torchdynamo":
return (
_configs.TorchDynamoTestConfig(
RefBackendLinalgOnTensorsBackend(generate_runtime_verification=False)
),
TORCHDYNAMO_XFAIL_SET,
TORCHDYNAMO_CRASHING_SET,
)
raise ValueError(f"Unhandled config {config}.")
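`_setup_deprecated_config` imports `torch_mlir_e2e_test.pt1_configs` inside the function body, so the legacy module is only loaded when a deprecated config is actually selected. A self-contained sketch of that deferred-import pattern, with the stdlib's `ftplib` standing in for the legacy module (an assumption purely for illustration):

```python
import sys

def setup_deprecated(name: str) -> str:
    print(f"Warning: the selected config, '{name}', is not actively supported.")
    # Deferred import: only paid for when a deprecated config is chosen.
    import ftplib as _legacy  # stands in for torch_mlir_e2e_test.pt1_configs
    return f"{name} via {_legacy.__name__}"

result = setup_deprecated("torchdynamo")
loaded_after = "ftplib" in sys.modules
```

This keeps the supported fx_importer/onnx paths free of any dependency on the pt1 legacy package at import time.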


def main():
args = _get_argparse().parse_args()

all_test_unique_names = set(test.unique_name for test in GLOBAL_TEST_REGISTRY)

# Find the selected config.
if args.config == "linalg":
config = JITImporterTestConfig(RefBackendLinalgOnTensorsBackend())
xfail_set = LINALG_XFAIL_SET
crashing_set = LINALG_CRASHING_SET
elif args.config == "stablehlo":
config = JITImporterTestConfig(LinalgOnTensorsStablehloBackend(), "stablehlo")
xfail_set = all_test_unique_names - STABLEHLO_PASS_SET
crashing_set = STABLEHLO_CRASHING_SET
elif args.config == "tosa":
config = JITImporterTestConfig(LinalgOnTensorsTosaBackend(), "tosa")
xfail_set = all_test_unique_names - TOSA_PASS_SET
crashing_set = TOSA_CRASHING_SET
elif args.config == "native_torch":
config = NativeTorchTestConfig()
xfail_set = set()
crashing_set = set()
elif args.config == "torchscript":
config = TorchScriptTestConfig()
xfail_set = set()
crashing_set = set()
elif args.config == "lazy_tensor_core":
config = LazyTensorCoreTestConfig()
xfail_set = LTC_XFAIL_SET
crashing_set = LTC_CRASHING_SET
elif args.config == "fx_importer":
config = FxImporterTestConfig(RefBackendLinalgOnTensorsBackend())
xfail_set = FX_IMPORTER_XFAIL_SET
crashing_set = FX_IMPORTER_CRASHING_SET
elif args.config == "fx_importer_stablehlo":
config = FxImporterTestConfig(LinalgOnTensorsStablehloBackend(), "stablehlo")
xfail_set = FX_IMPORTER_STABLEHLO_XFAIL_SET
crashing_set = FX_IMPORTER_STABLEHLO_CRASHING_SET
elif args.config == "fx_importer_tosa":
config = FxImporterTestConfig(LinalgOnTensorsTosaBackend(), "tosa")
xfail_set = FX_IMPORTER_TOSA_XFAIL_SET
crashing_set = FX_IMPORTER_TOSA_CRASHING_SET
elif args.config == "torchdynamo":
# TODO: Enable runtime verification and extend crashing set.
config = TorchDynamoTestConfig(
RefBackendLinalgOnTensorsBackend(generate_runtime_verification=False)
)
xfail_set = TORCHDYNAMO_XFAIL_SET
crashing_set = TORCHDYNAMO_CRASHING_SET
elif args.config == "onnx":
config = OnnxBackendTestConfig(RefBackendLinalgOnTensorsBackend())
xfail_set = ONNX_XFAIL_SET
crashing_set = ONNX_CRASHING_SET
elif args.config == "onnx_tosa":
config = OnnxBackendTestConfig(LinalgOnTensorsTosaBackend(), output_type="tosa")
xfail_set = ONNX_TOSA_XFAIL_SET
crashing_set = ONNX_TOSA_CRASHING_SET
config, xfail_set, crashing_set = _setup_config(args.config, all_test_unique_names)

do_not_attempt = set(
args.crashing_tests_to_not_attempt_to_run_and_a_bug_is_filed or []
@@ -231,11 +276,6 @@ def main():

# Report the test results.
failed = report_results(results, xfail_set, args.verbose, args.config)
if args.config == "torchdynamo":
print(
"\033[91mWarning: the TorchScript based dynamo support is deprecated. "
"The config for torchdynamo is planned to be removed in the future.\033[0m"
)
if args.ignore_failures:
sys.exit(0)
sys.exit(1 if failed else 0)
@@ -11,7 +11,7 @@
# might be used to keep more elaborate sets of testing configurations).

from torch_mlir_e2e_test.test_suite import COMMON_TORCH_MLIR_LOWERING_XFAILS
from torch_mlir._version import torch_version_for_comparison, version
from torch_mlir_e2e_test.utils import torch_version_for_comparison, version

print(f"TORCH_VERSION_FOR_COMPARISON =", torch_version_for_comparison())

File renamed without changes.
8 changes: 8 additions & 0 deletions projects/e2e/torch_mlir_e2e_test/configs/__init__.py
Original file line number Diff line number Diff line change
@@ -0,0 +1,8 @@
# Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
# See https://llvm.org/LICENSE.txt for license information.
# SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
# Also available under a BSD-style license. See LICENSE.

from .fx_importer_backend import FxImporterTestConfig
from .native_torch import NativeTorchTestConfig
from .onnx_backend import OnnxBackendTestConfig