Add conversion from ONNX to PyTorch to allow cross conversion TensorFlow->ONNX->PyTorch #133
Comments
This can also be used to implement ONNX to PyTorch conversion. Or is it implemented elsewhere?
Hi @SuperSecureHuman, yep, as you can see the
I've started working on this (by directly importing the onnx2torch module). Not sure if I can assign myself to this, because it's going to take me some time to understand the converters API (and the overall layout of all the other APIs). Edit: I also have 0 experience with ONNX 😅
Thanks, your help is much appreciated! Of course, take your time to understand the code, and if you have any questions about how it works, just ask ;)
```python
from nebullvm.operations.conversions.onnx import convert_onnx_to_torch
import onnx

onnx_model_path = '/home/venom/Downloads/mobilenetv2-12.onnx'
onnx_model = onnx.load(onnx_model_path)
output_file_path = '/home/venom/Downloads/model.pt'
device = 'cpu'
convert_onnx_to_torch(onnx_model, output_file_path, device)
```

For now the conversion works without errors. I will keep working to improve it. Meanwhile, it would be great if you could edit parts of my commit to make it more "module like" :)
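One quick way to sanity-check the converted model (a minimal sketch, not part of the commit; the helper name, input shape, and file paths are assumptions for illustration) is to push a random tensor through it and look at the output shape:

```python
import torch

def sanity_check(torch_model, input_shape=(1, 3, 224, 224)):
    """Run one random input through the model and return the output shape."""
    torch_model.eval()
    with torch.no_grad():
        out = torch_model(torch.randn(*input_shape))
    return tuple(out.shape)

# e.g. for the converted model saved above (hypothetical usage):
# model = torch.load("/home/venom/Downloads/model.pt")
# print(sanity_check(model))  # an ImageNet classifier should end in 1000 classes
```

This doesn't prove numerical equivalence with the ONNX model, but it catches shape mismatches and broken graphs cheaply.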
Great! Maybe also check with another couple of models to see whether they convert without errors as well. After that, you could also implement the method
Update: Sorry for the delay.

```python
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'

from nebullvm.operations.conversions.onnx import convert_onnx_to_torch
import onnx

onnx_model_path = '/home/venom/Downloads/mobilenetv2-7.onnx'
onnx_model = onnx.load(onnx_model_path)
output_file_path = '/home/venom/Downloads/model.pt'
device = 'cpu'
outfile = convert_onnx_to_torch(onnx_model, output_file_path, device)
if outfile is not None:
    print("Converted successfully")
    print(outfile)
else:
    print("Conversion failed")
```

Error handling is now added. Commit - cb74ee8
The onnx2torch module requires a minimum opset version of 9. In the example above, the model used opset 7 (hence it failed).

I tried converting all the models in https://github.com/onnx/models (full list in the collapsed details), then kept only the non-quantised models that meet the minimum opset requirement (the non-quantised, opset 9+ models are in the second collapsed details).

Result: 43 failed, 24 converted.
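Since onnx2torch needs at least opset 9, a small pre-check could skip (or route to an upgrade) any model below that before attempting conversion. A minimal sketch — the helper name is hypothetical, and the minimum value is taken from this thread:

```python
MIN_SUPPORTED_OPSET = 9  # minimum opset onnx2torch accepts, per this thread

def needs_opset_upgrade(opset_version: int, minimum: int = MIN_SUPPORTED_OPSET) -> bool:
    """Return True when a model's opset is below the converter's minimum."""
    return opset_version < minimum

# With a loaded model, the opset can be read from the model proto, e.g.:
#   import onnx
#   model = onnx.load("model.onnx")
#   opset_version = next(
#       op.version for op in model.opset_import if op.domain in ("", "ai.onnx")
#   )
```

Filtering the model zoo this way up front would separate "unsupported opset" failures from genuine converter bugs.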
Workaround for the opset version: ONNX Version Conversion - Official Docs Example

```python
import onnx
from onnx import version_converter
import torch
from onnx2torch import convert

# Load the ONNX model
model = onnx.load("model.onnx")

# Convert the model to the target opset version
target_version = 13
converted_model = version_converter.convert_version(model, target_version)

# Convert to torch
torch_model = convert(converted_model)
torch.save(torch_model, "model.pt")
```

This could be added to the docs, and mentioned in the error message.
At the moment we support only PyTorch to ONNX and TensorFlow to ONNX conversions. We could test and use this repo to convert an ONNX model to PyTorch in order to support TensorFlow to PyTorch conversion.