LSTM Model causing Error: "Type INT32 (2) not supported" on trying to run #32
Comments
Hi @ts532, this is not ESP32 specific. It's more that TFLM does not support these ops, or does not support them for the specific data types named in the errors. You can find a similar discussion here: #30 (comment). You may also want to raise an issue or feature request with upstream tflite-micro. Hope this helps.
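One quick way to confirm which ops and tensor types actually ended up in a converted model is TensorFlow's flatbuffer analyzer; a minimal sketch, assuming TF 2.7+ (the model path is a placeholder):

```python
import tensorflow as tf

# Prints every operator in the flatbuffer together with its input and
# output tensor types; an INT32 tensor feeding an op whose TFLM kernel
# only supports float/int8 will show up here. "model.tflite" is a
# placeholder path.
tf.lite.experimental.Analyzer.analyze(model_path="model.tflite")
```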
Hi there, thanks for the patch. Unfortunately, despite applying its contents, the error persists. To my understanding, the patch requires changes:
- in tflite-lib/tensorflow/lite/micro/kernels/floor_div.cc
- in tflite-lib/tensorflow/lite/micro/kernels/sub.cc
As these edits pertained to sub.cc, as opposed to add.cc, which seems to be the source of my errors, I attempted to adapt the changes made to sub.cc to work with add.cc. This resulted in the program crashing outright rather than printing errors. The changes I made were:
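The exact diff isn't reproduced here; the sketch below shows the kind of change involved, adapted by analogy from the sub.cc patch, so the kernel-internal names are assumptions rather than the real add.cc sources:

```cpp
// Illustrative only: extend the Eval dispatch in add.cc to accept
// kTfLiteInt32, mirroring the sub.cc patch. Helper names follow the
// tflite-micro kernel style and may not match the actual file.
switch (output->type) {
  case kTfLiteFloat32:
    // ... existing float path unchanged ...
    break;
  case kTfLiteInt32:  // added by analogy with sub.cc
    reference_ops::Add(op_params,
                       tflite::micro::GetTensorShape(input1),
                       tflite::micro::GetTensorData<int32_t>(input1),
                       tflite::micro::GetTensorShape(input2),
                       tflite::micro::GetTensorData<int32_t>(input2),
                       tflite::micro::GetTensorShape(output),
                       tflite::micro::GetTensorData<int32_t>(output));
    break;
  default:
    // ... existing quantised (int8) and error paths ...
    break;
}
```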
These changes result in the following backtrace error when decoded:

Is there any advice on a next step?
Update: a potential solution to this error can be found here (TF-Micro)
Closing this, as INT32 support for the ADD op is now available with tensorflow/tflite-micro#1847.
Similar to this issue.
I've been trying to get an LSTM model running on my ESP32 (Adafruit HUZZAH32 – ESP32 Feather Board) for some time, but have been completely stumped by this issue. The complete set of errors that occurs is:
This happens when calling `interpreter->Invoke();`.
The code I am working with is my own, but heavily based on the micro_speech example. To avoid as much fuss as possible I am using an AllOpsResolver, so every op should be loaded; it seems to be purely a type issue. I do not understand why it is occurring, as I have quantised my model to int8 and am providing int8 data. I have even confirmed that the model's own inputs and outputs report int8 rather than int32: input_model->type and output_model->type both return 9, which according to TfLiteType is kTfLiteInt8, as opposed to 2 for kTfLiteInt32. Looking at the model in Netron also shows the inputs and outputs as int8 rather than int32.
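For reference, the standard full-integer conversion recipe looks roughly like this (a sketch, assuming a Keras model; model and calibration_windows are placeholders, not names from my code):

```python
import tensorflow as tf

def representative_dataset():
    # Placeholder: yield a few real 20x8 input windows as float32,
    # flattened to the 160-length shape the model expects.
    for window in calibration_windows:  # hypothetical data source
        yield [window.astype("float32").reshape(1, 160)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force full-integer quantisation, including the model's own I/O.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_model = converter.convert()
```

Note that int8 inputs and outputs do not guarantee that every internal tensor is int8: the LSTM is lowered to primitive ops during conversion, and some intermediate tensors can remain INT32, which seems to be what the kernel error is complaining about.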
All code, for both my model and my project directory, is attached; the key files are in the .zip below.
srcFile+Model.zip
The code itself does run; however, the predictions are simply the first 10 inputs. The model expects a 20x8 input array (flattened to a 160-length 1D array) and outputs a 1x10 array of time-series predictions.
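Concretely, the per-inference flow on the device looks something like the sketch below (a hypothetical helper, not code from the attached zip; the function name, header paths, and MicroPrintf usage are assumptions about a recent tflite-micro layout):

```cpp
#include <cmath>

#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_log.h"

// Hypothetical helper: quantise a 160-float window into the model's
// int8 input tensor, run inference, and dequantise the 10 int8
// time-series predictions back to floats.
TfLiteStatus RunWindow(tflite::MicroInterpreter* interpreter,
                       const float raw_input[160], float predictions[10]) {
  TfLiteTensor* input = interpreter->input(0);
  TfLiteTensor* output = interpreter->output(0);

  for (int i = 0; i < 160; ++i) {
    // Quantise using the input tensor's own scale and zero-point.
    input->data.int8[i] = static_cast<int8_t>(
        roundf(raw_input[i] / input->params.scale) + input->params.zero_point);
  }

  TfLiteStatus status = interpreter->Invoke();
  if (status != kTfLiteOk) {
    MicroPrintf("Invoke failed");  // the INT32 kernel error surfaces here
    return status;
  }

  for (int i = 0; i < 10; ++i) {
    predictions[i] =
        (output->data.int8[i] - output->params.zero_point) *
        output->params.scale;
  }
  return kTfLiteOk;
}
```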
I am unsure whether this error is a result of my ESP32 code or my model conversion process; help would be appreciated.