
Error when flashing LSTM model onto ESP32 (TFMIC-20) #23

Open
ChicchiPhD opened this issue Sep 12, 2022 · 8 comments
@ChicchiPhD

Hi Developers, hi all!

I'm running some experiments with LSTM networks using Keras, TensorFlow (specifically, tf-nightly 2.11.0-dev20220911), and TensorFlow Lite. On the model-authoring side, I can successfully build the model, apply compression techniques, and produce the equivalent Lite model and the .cc file.

However, when it's time to flash the model onto the ESP32, I always get an error, typically of the kind "Failed to get registration from op code CUSTOM". So I've tried something simpler: 1) using an existing example from the Keras LSTM fusion Codelab, and 2) skipping compression techniques entirely.

Even so, I still get an error when flashing onto the ESP32:

Didn't find op for builtin opcode 'UNIDIRECTIONAL_SEQUENCE_LSTM' version '1'. An older version of this builtin might be supported. Are you using an old TFLite binary with a newer model?
Failed to get registration from op code UNIDIRECTIONAL_SEQUENCE_LSTM

To recap:

  • the existing example I'm using to generate the very simple model is from the Keras LSTM fusion Codelab, which uses TensorFlow version 2.4;
  • I use the Espressif toolchain, version esp-idf-v4.4;
  • my target is esp32;
  • the specific hardware I flash the model onto is an ESP32-WROOM-32.

I'm stuck trying to understand the source of the problem, i.e., whether it's a matter of the TensorFlow version, the Espressif version, or something that isn't yet supported on the Espressif side. Can you please help me shed some light on this?

Attached you can find the TFLite model and the .cc file generated with xxd from the simple Keras LSTM fusion Codelab example, in case it helps reproduce the error.
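(Editor's note: for anyone reproducing this setup without xxd installed, the `.tflite`-to-`.cc` conversion step can be approximated with a short script like the one below. The function and variable names are placeholders, not part of any toolchain.)

```python
def bytes_to_c_array(data: bytes, var_name: str = "g_model") -> str:
    """Render raw bytes as a C unsigned-char array, similar to `xxd -i`."""
    lines = [f"const unsigned char {var_name}[] = {{"]
    for i in range(0, len(data), 12):
        # Emit 12 bytes per line as 0x-prefixed hex literals.
        chunk = ", ".join(f"0x{b:02x}" for b in data[i:i + 12])
        lines.append(f"  {chunk},")
    lines.append("};")
    lines.append(f"const unsigned int {var_name}_len = {len(data)};")
    return "\n".join(lines)

# Example usage (path is a placeholder):
# with open("model.tflite", "rb") as f:
#     print(bytes_to_c_array(f.read()))
```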

Please, let me know if you need any further information.

Chiara
material.zip

@vikramdattu
Collaborator

Hi @ChicchiPhD, UNIDIRECTIONAL_SEQUENCE_LSTM is supported in the current version.

Can you please make sure you have registered this OP in your code?

Something like this:

  static tflite::MicroMutableOpResolver<5> micro_op_resolver;
  micro_op_resolver.AddReshape();
  micro_op_resolver.AddSoftmax();
  micro_op_resolver.AddFullyConnected();
  micro_op_resolver.AddUnidirectionalSequenceLSTM();

and check if this works?
Reference from example: https://github.com/espressif/tflite-micro-esp-examples/blob/2a93aa3106f181768d75f64bcac629f344a2ca22/examples/person_detection/main/main_functions.cc#L94

@ChicchiPhD
Author

ChicchiPhD commented Sep 13, 2022 via email

@vikramdattu
Collaborator

Hi @ChicchiPhD, yes, you're right. AllOpsResolver should work just fine, but the code pulled in will be larger!

I used the model from your model.cc with AllOpsResolver, and it works fine.

I think you need to update your local code. Just check out the latest master from espressif/tflite-micro-esp-examples; this should solve your problem.

@ChicchiPhD
Author

ChicchiPhD commented Sep 13, 2022 via email

@vikramdattu
Collaborator

Hi @ChicchiPhD, I was referring to updating the tflite-micro-esp-examples repo.
If this is a problem with CUSTOM ops, it is not Espressif-specific, as it concerns the core tflite-micro framework!

The previous model you attached (with UnidirectionalSequenceLSTM) works fine for me. I cannot find the attachments in your recent message; I could take a look once they are uploaded.

@ChicchiPhD
Author

Hi @vikramdattu ,

Sorry, uploading the attachment via email seems to have failed; I had to switch to the GitHub page to upload the zip folder again. Can you please check whether you get the same error with this file?

Thank you so much for your help.
material (1).zip

@vikramdattu
Collaborator

Hey @ChicchiPhD, sorry for the late reply. Did you figure out how to add custom ops yourself? Do you still want me to try the model?

@chakib67100

Hi @vikramdattu, sorry to disturb you. I have a similar issue when I try to run my LSTM model on an ESP32: Failed to get registration from op code CUSTOM

AllocateTensors() failed
Model begin: ERROR
Guru Meditation Error: Core 0 panic'ed (Load access fault). Exception was unhandled.

Here is the TFLite model that I've converted into a char array for the implementation. Can you help me with this, please?
modele_sans_quantification.zip
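(Editor's note: a crude way to see which op names a .tflite model references, including custom ops, is to dump the printable strings from the flatbuffer, the way the Unix `strings` utility would; custom op names are stored as plain text, so an unexpected name here often explains a "Failed to get registration from op code CUSTOM" error. A minimal sketch, with the file path below as a placeholder:)

```python
import string

# Characters that commonly appear in TFLite op and tensor names.
PRINTABLE = set(string.ascii_letters + string.digits + "_.")

def printable_strings(data: bytes, min_len: int = 4):
    """Yield runs of printable characters, like the Unix `strings` tool."""
    run = []
    for byte in data:
        ch = chr(byte)
        if ch in PRINTABLE:
            run.append(ch)
        else:
            if len(run) >= min_len:
                yield "".join(run)
            run = []
    if len(run) >= min_len:
        yield "".join(run)

# Example usage (path is a placeholder):
# with open("modele_sans_quantification.tflite", "rb") as f:
#     for name in printable_strings(f.read()):
#         print(name)
```

A more precise alternative, when a full TensorFlow install is available, is `tf.lite.experimental.Analyzer.analyze(model_path=...)`, which lists the model's operators directly.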

@github-actions github-actions bot changed the title Error when flashing LSTM model onto ESP32 Error when flashing LSTM model onto ESP32 (TFMIC-20) Apr 3, 2024