Replies: 1 comment
I had a similar problem, but in my case it happened on my Jetson device. To fix it, I simply rebooted the Jetson, and then it started showing the inference results from the ONNX model.
Hi! I am having difficulties converting to ONNX. The conversion succeeds, and I can inspect the model in NETRON. However, when I run inference on it, it does not detect anything, whereas inference with the .pt weights works perfectly. Every image preprocessing step is done the same way for both the .pt and ONNX models, but nothing changes. I have tried converting with both export_model and torch.onnx.export, and both give the same result. Ideas?
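When the exported graph runs but produces no detections, the cause is often a silent input mismatch: the ONNX model typically bakes in a fixed input shape, float32 dtype, and a particular normalization, and feeding anything else degrades outputs without raising an error. A minimal sanity check on the tensor handed to the ONNX runtime can catch this early; the sketch below assumes a hypothetical pipeline (uint8 HWC image, normalization to [0, 1], NCHW layout, 640x640 input), so adjust the constants to match your actual export:

```python
import numpy as np

def preprocess(img: np.ndarray) -> np.ndarray:
    """Hypothetical preprocessing: HWC uint8 image -> NCHW float32 in [0, 1]."""
    x = img.astype(np.float32) / 255.0       # scale pixel values to [0, 1]
    x = x.transpose(2, 0, 1)[None]           # HWC -> CHW, then add batch dim
    return np.ascontiguousarray(x)

def check_onnx_input(x: np.ndarray, shape=(1, 3, 640, 640)) -> None:
    """Fail fast on the mismatches that most often make ONNX inference find nothing."""
    assert x.dtype == np.float32, f"expected float32, got {x.dtype}"
    assert x.shape == shape, f"expected {shape}, got {x.shape}"
    assert x.min() >= 0.0 and x.max() <= 1.0, "input not normalized to [0, 1]"

# Dummy image standing in for a real frame
img = np.random.randint(0, 256, (640, 640, 3), dtype=np.uint8)
x = preprocess(img)
check_onnx_input(x)
```

If this check passes for the exact array you pass to the ONNX session, the next things worth comparing between the .pt and ONNX paths are the confidence threshold and how the raw output tensor is decoded.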