I tried to run this tutorial: Example: End-to-end AlexNet from PyTorch to Caffe2.
However, I found that inference with onnx-caffe2 is about 10x slower than the original PyTorch model (see the timing sketch below).
Can anyone help? Thanks. If the inference times were comparable, it would be great to deploy PyTorch models with Caffe2.
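For reference, this is roughly how I set up the comparison, following the tutorial. It is only a minimal sketch: the batch size, iteration count, and the use of a pretrained torchvision AlexNet are my own choices, not part of the tutorial.

```python
import time
import numpy as np
import torch
from torch.autograd import Variable
import torchvision.models as models

import onnx
import onnx_caffe2.backend as backend

ITERS = 100
batch = np.random.randn(1, 3, 224, 224).astype(np.float32)

# Time the original PyTorch model on the GPU.
model = models.alexnet(pretrained=True).cuda().eval()
x = Variable(torch.from_numpy(batch).cuda())
torch.cuda.synchronize()
start = time.time()
for _ in range(ITERS):
    model(x)
torch.cuda.synchronize()
print("PyTorch:     %.4f s / iter" % ((time.time() - start) / ITERS))

# Export to ONNX and run the same model through the onnx-caffe2 backend.
torch.onnx.export(model, x, "alexnet.onnx", export_params=True)
rep = backend.prepare(onnx.load("alexnet.onnx"), device="CUDA:0")
start = time.time()
for _ in range(ITERS):
    rep.run(batch)
print("onnx-caffe2: %.4f s / iter" % ((time.time() - start) / ITERS))
```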
My machine:
Ubuntu 14.04
CUDA 8.0
cuDNN 7.0.3
Caffe2 latest
PyTorch 0.3.0