
LSTM Model causing Error: "Type INT32 (2) not supported" on trying to run #32

Closed · tms40 opened this issue Mar 9, 2023 · 4 comments

tms40 commented Mar 9, 2023

Similar to this issue.

I've been trying to get an LSTM model running on my ESP32 (Adafruit HUZZAH32 – ESP32 Feather Board) for some time, but I have been completely stumped by this issue. The complete set of errors is:

Type INT32 (2) not supported.
Node ADD (number 4) failed to invoke with status 1
Node WHILE (number 2) failed to invoke with status 1

These errors occur when calling interpreter->Invoke();
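For reference, the failing call site looks roughly like this (following the micro_speech pattern); the "status 1" in the log corresponds to kTfLiteError in TfLiteStatus:

    // Guarding the failing call, micro_speech-style. "interpreter" is the
    // tflite::MicroInterpreter set up earlier; kTfLiteOk == 0, kTfLiteError == 1.
    TfLiteStatus invoke_status = interpreter->Invoke();
    if (invoke_status != kTfLiteOk) {
      MicroPrintf("Invoke() failed with status %d", invoke_status);
      return;
    }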

The code I am working with is my own, but heavily based on the micro_speech example. To avoid as much fuss as possible I am using an AllOpsResolver, so every op should be loaded; it seems to be purely a type issue. I do not understand why it occurs, as I have quantised my model to int8 and am providing int8 data. I have even confirmed that the model's own input and output tensors report int8 rather than int32: both input_model->type and output_model->type print 9, which in the TfLiteType enum is int8, as opposed to the 2 of int32. Inspecting the model in Netron likewise shows int8 inputs and outputs.
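For clarity, the check I describe looks roughly like this (names are from my own setup):

    // Rough sketch of the type check described above. The printed values are
    // TfLiteType enum entries: kTfLiteInt8 == 9, kTfLiteInt32 == 2.
    TfLiteTensor* model_input = interpreter->input(0);
    TfLiteTensor* model_output = interpreter->output(0);
    MicroPrintf("input type: %d, output type: %d",
                static_cast<int>(model_input->type),
                static_cast<int>(model_output->type));  // both print 9 (int8)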

All code, for both my model and project directory, is attached; the key files are in the .zip file:
srcFile+Model.zip

The code itself does run; however, the predictions are simply the first 10 inputs. The model expects a 20x8 input array (flattened to a 160-length 1D array) and outputs a 1x10 array of time series predictions.
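For context, this is roughly how I feed the input and read the predictions back (variable names are mine; the scale and zero point come from the tensors' quantisation params):

    // Rough sketch of my input/output handling; "features" is my own
    // 160-element float buffer, and the names here are illustrative.
    #include <math.h>  // for roundf

    TfLiteTensor* input = interpreter->input(0);
    for (int i = 0; i < 160; ++i) {
      // Quantise each float feature to int8: q = round(x / scale) + zero_point.
      int32_t q = static_cast<int32_t>(roundf(features[i] / input->params.scale)) +
                  input->params.zero_point;
      input->data.int8[i] = static_cast<int8_t>(q);  // unclamped in this sketch
    }

    // ... interpreter->Invoke() as above ...

    TfLiteTensor* output = interpreter->output(0);
    for (int i = 0; i < 10; ++i) {
      // Dequantise each prediction back to float.
      float pred = (output->data.int8[i] - output->params.zero_point) *
                   output->params.scale;
      MicroPrintf("prediction %d: %f", i, static_cast<double>(pred));
    }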

I am unsure whether this error is a result of my ESP32 code or of my model conversion process; any help would be appreciated.

vikramdattu (Collaborator) commented

Hi @tms40, this is not ESP32 specific. It's more that TFLM does not support the ops, or the ops for the specific data types, for which the errors are produced.
Although you have converted the model with INT8 inputs/outputs, I have seen that the conversion can still end up using INT32 ops for certain op versions, specifically when LSTMs are used.

Please find one more similar discussion here: #30 (comment)
As a remedy, you can manually add support for the missing types, as I have done in the patch above.

You may also want to raise an issue or feature request here. Hope this helps.
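Side note: since you are using an AllOpsResolver, you could instead register only the ops the graph actually needs via MicroMutableOpResolver. This will not add the missing INT32 kernels by itself, but it makes explicit which ops the model pulls in. The list below is only a guess based on your errors and the patch, so adjust it to what Netron shows:

    // Rough sketch: register only the ops this LSTM graph appears to use.
    // The op list here is an assumption; check the model in Netron.
    #include "tensorflow/lite/micro/micro_mutable_op_resolver.h"

    static tflite::MicroMutableOpResolver<8> resolver;
    resolver.AddAdd();             // the node failing with INT32
    resolver.AddWhile();           // the LSTM's time-step loop
    resolver.AddSub();
    resolver.AddCast();
    resolver.AddFloorDiv();
    resolver.AddConcatenation();
    resolver.AddFullyConnected();
    resolver.AddQuantize();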


tms40 commented Mar 10, 2023

Hi there, thanks for the patch. Unfortunately, despite applying its contents, the error persists. To my understanding the patch requires:
in tflite-lib/tensorflow/lite/micro/kernels/cast.cc

  • add case kTfLiteInt32 within switch(input->type), just below case kTfLiteUInt8 at ~line 87.

in tflite-lib/tensorflow/lite/micro/kernels/floor_div.cc

  • add case kTfLiteInt32 within switch(input1->type) just above case kTfLiteFloat32 on ~line 113

in tflite-lib/tensorflow/lite/micro/kernels/sub.cc

  • add a new case statement to switch(output->type), just above its default statement, at ~lines 129-149:
    case kTfLiteInt32: {  // patch start
      if (need_broadcast) {
        tflite::reference_ops::BroadcastQuantSubSlow(
            op_params, tflite::micro::GetTensorShape(input1),
            tflite::micro::GetTensorData<int32_t>(input1),
            tflite::micro::GetTensorShape(input2),
            tflite::micro::GetTensorData<int32_t>(input2),
            tflite::micro::GetTensorShape(output),
            tflite::micro::GetTensorData<int32_t>(output));
      } else {
        tflite::reference_ops::Sub(
            op_params, tflite::micro::GetTensorShape(input1),
            tflite::micro::GetTensorData<int32_t>(input1),
            tflite::micro::GetTensorShape(input2),
            tflite::micro::GetTensorData<int32_t>(input2),
            tflite::micro::GetTensorShape(output),
            tflite::micro::GetTensorData<int32_t>(output));
      }
      break;
    }  // patch end
  • replace if (output->type == kTfLiteFloat32) with if (output->type == kTfLiteFloat32 || output->type == kTfLiteInt32) on ~line 171

As these edits pertained to sub.cc rather than add.cc, which seems to be the source of my errors, I attempted to adapt the sub.cc changes to add.cc. This resulted in the program crashing outright rather than printing errors. The changes I made were:
in tflite-lib/tensorflow/lite/micro/kernels/add.cc

  • add a new case statement to switch(output->type) at ~lines 129-149:
    case kTfLiteInt32: {  // patch start
      MicroPrintf("int32");
      if (need_broadcast) {
        reference_ops::BroadcastAdd4DSlow(
            op_params, tflite::micro::GetTensorShape(input1),
            tflite::micro::GetTensorData<int32_t>(input1),
            tflite::micro::GetTensorShape(input2),
            tflite::micro::GetTensorData<int32_t>(input2),
            tflite::micro::GetTensorShape(output),
            tflite::micro::GetTensorData<int32_t>(output));
      } else {
        reference_ops::Add(
            op_params, tflite::micro::GetTensorShape(input1),
            tflite::micro::GetTensorData<int32_t>(input1),
            tflite::micro::GetTensorShape(input2),
            tflite::micro::GetTensorData<int32_t>(input2),
            tflite::micro::GetTensorShape(output),
            tflite::micro::GetTensorData<int32_t>(output));
      }
      break;
    }  // patch end
  • replace if (output->type == kTfLiteFloat32) with if (output->type == kTfLiteFloat32 || output->type == kTfLiteInt32) on ~line 171.

This edit results in the following backtrace:
Backtrace: 0x40082f4d:0x3ffb0e50 0x40087c09:0x3ffb0e70 0x4008c805:0x3ffb0e90 0x400d9e73:0x3ffb0f10 0x400da311:0x3ffb0f30 0x400da4d7:0x3ffb0f80 0x400dab8a:0x3ffb1100 0x400fc65e:0x3ffb1120 0x400f70b3:0x3ffb1160 0x400fc65e:0x3ffb1180 0x400f79b6:0x3ffb11c0 0x400d332a:0x3ffb11e0 0x400fee76:0x3ffb2820

when decoded:

0x40082f4d: panic_abort at /home/runner/work/esp32-arduino-lib-builder/esp32-arduino-lib-builder/esp-idf/components/esp_system/panic.c line 402
0x40087c09: esp_system_abort at /home/runner/work/esp32-arduino-lib-builder/esp32-arduino-lib-builder/esp-idf/components/esp_system/esp_system.c line 128
0x4008c805: abort at /home/runner/work/esp32-arduino-lib-builder/esp32-arduino-lib-builder/esp-idf/components/newlib/abort.c line 46
0x400d9e53: tflite::MatchingDim(tflite::RuntimeShape const&, int, tflite::RuntimeShape const&, int) at Z:/My Files/PhDStuff/ESPArduino2/lib/tflite-lib/tensorflow/lite/kernels/internal/types.h line 265
0x400da2f1: tflite::reference_ops::Concatenation (tflite::ConcatenationParams const&, tflite::RuntimeShape const* const*, signed char const* const*, tflite::RuntimeShape const&, signed char*) at Z:/My Files/PhDStuff/ESPArduino2/lib/tflite-lib/tensorflow/lite/kernels/internal/reference/concatenation.h line 43
0x400da4b7: tflite::(anonymous namespace)::EvalUnquantized (TfLiteContext*, TfLiteNode*) at Z:/My Files/PhDStuff/ESPArduino2/lib/tflite-lib/tensorflow/lite/micro/kernels/kernel_util.h line 64
0x400dab6a: tflite::(anonymous namespace)::Eval(TfLiteContext*, TfLiteNode*) at Z:/My Files/PhDStuff/ESPArduino2/lib/tflite-lib/tensorflow/lite/micro/kernels/concatenation.cc line 232
0x400fc61a: tflite::MicroGraph::InvokeSubgraph(int) at Z:/My Files/PhDStuff/ESPArduino2/lib/tflite-lib/tensorflow/lite/micro/micro_graph.cc line 174
0x400f706f: tflite::(anonymous namespace)::Eval(TfLiteContext*, TfLiteNode*) at Z:/My Files/PhDStuff/ESPArduino2/lib/tflite-lib/tensorflow/lite/micro/kernels/while.cc line 107
0x400fc61a: tflite::MicroGraph::InvokeSubgraph(int) at Z:/My Files/PhDStuff/ESPArduino2/lib/tflite-lib/tensorflow/lite/micro/micro_graph.cc line 174
0x400f7972: tflite::MicroInterpreter::Invoke() at Z:/My Files/PhDStuff/ESPArduino2/lib/tflite-lib/tensorflow/lite/micro/micro_interpreter.cc line 286
0x400d331a: setup() at Z:/My Files/PhDStuff/ESPArduino2/src/main.cpp line 88
0x400fee32: loopTask(void*) at C:/Users/tms40/.platformio/packages/framework-arduinoespressif32/cores/esp32/main.cpp line 42
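
For what it's worth, the abort happens inside MatchingDim (types.h line 265 above), which, going by the stock source, just checks that two dimensions agree:

    // Approximate body of the aborting helper from
    // tensorflow/lite/kernels/internal/types.h (quoted from memory):
    inline int MatchingDim(const RuntimeShape& shape1, int index1,
                           const RuntimeShape& shape2, int index2) {
      TFLITE_DCHECK_EQ(shape1.Dims(index1), shape2.Dims(index2));
      return shape1.Dims(index1);
    }

So once the patched ADD lets execution continue, the CONCATENATION kernel appears to be handed tensors whose dimensions do not match, though I can't yet tell why.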

Is there any advice on a next step?


tms40 commented Mar 19, 2023

Update: a potential solution to this error can be found here (TF-Micro)

vikramdattu (Collaborator) commented

Closing this, as INT32 support for the ADD op is now available via tensorflow/tflite-micro#1847.
