[Help wanted] How to get the result of executing model_exec.py? #133
I think it will show the error in the console like this. Looks like you can also specify it.
Thanks! Have you encountered these two types of errors when executing the model_exec command?
@Zeus1116 Can you retry with a fresh install of nnsmith? Thanks.
BTW, if I use TVM as the backend, the errors do not appear. I guess the errors were caused by a version update of onnxruntime.
So the world is not so perfect that every framework supports every model and operator in every data type :) Different frameworks have their own sets of supported operators -- that's what the message is saying: we are testing the model on ONNXRuntime, which does not support this particular operator and data type. But no worries, we considered this when building NNSmith -- you can also specify in the command that you want to generate models compatible with ONNXRuntime:
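The command that followed appears to have been stripped when this thread was scraped. A sketch of what it likely looked like, assuming `backend.type` is the option recent nnsmith versions use to constrain `nnsmith.model_gen` to operators the target backend supports (worth confirming against your installed version's help output):

```shell
# Hypothetical reconstruction: generate a model restricted to
# operators/dtypes that ONNXRuntime supports.
nnsmith.model_gen model.type=onnx backend.type=onnxruntime
```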
Meanwhile, if you are mainly interested in fuzzing, you can simply try:
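The fuzzing command here was also lost in scraping; a hedged reconstruction, assuming the option names (`fuzz.time`, `fuzz.root`) match those in the nnsmith README for your version:

```shell
# Hypothetical reconstruction: fuzz ONNXRuntime with ONNX models for a
# fixed time budget, writing bug reports under fuzz_report/.
nnsmith.fuzz fuzz.time=30s model.type=onnx backend.type=onnxruntime fuzz.root=fuzz_report
```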
Thank you very much for your help! I am indeed very interested in using nnsmith for model testing, but based on my understanding, the fuzz command in nnsmith only retains models that trigger errors in the backend compiler. My goal is to collect all the models nnsmith generates during fuzzing, along with their execution results.
How do I know whether the model caused an error after executing:

```shell
nnsmith.model_exec model.type=onnx backend.type=onnxruntime model.path=nnsmith_output/model.onnx
```

or:

```shell
nnsmith.model_exec model.type=onnx backend.type=onnxruntime model.path=nnsmith_output/model.onnx cmp.with='{type:tvm, optmax:true, target:cpu}'
```
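One way to answer this programmatically is to drive `nnsmith.model_exec` with `subprocess` and record each model's exit status, assuming it follows the common CLI convention of returning nonzero and printing an error report when a check fails (verify this against your installed version). `run_and_record` below is a hypothetical helper, demonstrated with stand-in commands so the sketch runs without nnsmith installed:

```python
import subprocess
import sys

def run_and_record(cmd: list[str]) -> tuple[int, str]:
    """Run a command and return (exit_code, combined stdout+stderr).

    For an nnsmith.model_exec invocation, a nonzero exit code or an
    error report in the output would be the sign that the model
    triggered a bug (assumption -- check your nnsmith version).
    """
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return proc.returncode, proc.stdout + proc.stderr

# Stand-in commands; in practice cmd would be e.g.
#   ["nnsmith.model_exec", "model.type=onnx",
#    "backend.type=onnxruntime",
#    "model.path=nnsmith_output/model.onnx"]
ok_code, _ = run_and_record([sys.executable, "-c", "pass"])
bad_code, _ = run_and_record([sys.executable, "-c", "import sys; sys.exit(1)"])
print(ok_code, bad_code)  # → 0 1
```

Looping this over every generated model and saving `(model path, exit code, output)` would give the full collection of models and results the question asks about.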