Describe the bug
Hi,
I am sorry to bother you, but I can't get an exported ONNX model to produce the same result at inference time as engine.predict(). In particular, when I call engine.fit(model, datamodule) followed by engine.predict(model, datamodule), I get a reasonable result that seems correct. When I instead use the exported ONNX model, or even the same torch-based model without the Engine and the datamodule (a single forward pass), I get a different result that seems just wrong.
I have searched the code base, but I did not manage to understand what the Engine is doing differently.
I have attached a reproducible example based on the MVTecAD dataset. In the experiment.ipynb notebook you can see that the ONNX and the PyTorch models return the same result; inference on the same image through the Engine, however, yields a different one.
Note: in order to re-run the notebook it is necessary to re-download the dataset and to change the paths to the images inside the code.
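For reference, here is a minimal sketch of the three paths I am comparing. The dataset root, the exported model path, and the dummy input are placeholders from my local setup, not the actual notebook code, so treat those as assumptions:

```python
import onnxruntime as ort
import torch

from anomalib.data import MVTecAD
from anomalib.engine import Engine
from anomalib.models import Padim

# Path 1: train and predict through the Engine -- this gives a reasonable result.
datamodule = MVTecAD(root="./datasets/MVTecAD", category="bottle")  # local path, adjust as needed
model = Padim()
engine = Engine()
engine.fit(model=model, datamodule=datamodule)
engine_preds = engine.predict(model=model, datamodule=datamodule)

# Path 2: export to ONNX and run the same (pre-processed) image through onnxruntime.
engine.export(model=model, export_type="onnx")
session = ort.InferenceSession(
    "results/Padim/MVTecAD/bottle/latest/weights/onnx/model.onnx"  # path from my run
)
input_name = session.get_inputs()[0].name
image = torch.rand(1, 3, 256, 256)  # stand-in for a real pre-processed test image
onnx_out = session.run(None, {input_name: image.numpy()})

# Path 3: a single forward pass on the torch model, without Engine or datamodule.
# Paths 2 and 3 agree with each other, but both differ from engine.predict().
model.eval()
with torch.no_grad():
    torch_out = model(image)
```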
Dataset
MVTecAD
Model
PADiM
Steps to reproduce the behavior
- Unzip the attached file
- Run uv sync and uv lock to install the virtual environment
- Check the experiment.ipynb notebook
OS information:
- OS: Ubuntu 24.04
- Python version: 3.12.0
- Anomalib version: 2.1.0
- PyTorch version: 2.8.0
- CUDA/cuDNN version: 12.8
- GPU models and configuration: GeForce RTX 3070
Expected behavior
ONNX and Torch models should yield the same output as engine.predict(), provided the input image is pre-processed correctly.
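For completeness, this is roughly the manual pre-processing I apply before the single forward pass. I am assuming the default PADiM pipeline (resize to 256x256 plus ImageNet normalization); if the Engine applies something different, that could be the source of the mismatch:

```python
import torch
from PIL import Image
from torchvision.transforms import v2 as T

# Assumed pre-processing: 256x256 resize and ImageNet normalization.
# These values are my assumption about the default pipeline, not taken from the Engine.
transform = T.Compose([
    T.ToImage(),
    T.ToDtype(torch.float32, scale=True),
    T.Resize((256, 256), antialias=True),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image = Image.open("path/to/test_image.png").convert("RGB")  # hypothetical path
batch = transform(image).unsqueeze(0)  # shape: (1, 3, 256, 256)
```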
Screenshots
Pip/GitHub
pip
What version/branch did you use?
2.1.0
Configuration YAML
None
Logs
None
Code of Conduct
- I agree to follow this project's Code of Conduct