Checklist

- [x] Checked the issue tracker for similar issues to ensure this is not a duplicate.
- [x] Provided a clear description of your suggestion.
- [x] Included any relevant context or examples.
Issue or Suggestion Description
1. After quantizing mobilenetv3_small_float32.tflite to int8, accuracy drops severely; the model is essentially unusable for recognition.
2. Without quantization, mobilenetv3_small_float32.tflite is too large for the ESP32-S3 to load.
3. After float16 quantization, the size is acceptable, but the ESP32-S3 reports an error when loading the model.
4. With hybrid dynamic-range quantization, the size is suitable, but loading the model fails with:
   /IDF/examples/camera_my_debug/esp-tflite-micro/tensorflow/lite/micro/kernels/esp_nn/conv.cc Hybrid models are not supported on TFLite Micro.
5. Please advise on a way to run MobileNetV3 properly on the ESP32-S3.
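Regarding point 1: full-integer (int8) post-training quantization calibrates activation ranges from a representative dataset, and a missing or unrepresentative calibration set is a common cause of the kind of accuracy collapse described. Below is a minimal sketch of the standard TFLite full-integer conversion flow; the saved-model path and calibration-image array are hypothetical placeholders, and `tensorflow` is imported lazily inside the conversion function:

```python
import numpy as np

def representative_dataset(images, n=100):
    """Yield up to n calibration samples as single-image float32 batches."""
    for img in images[:n]:
        yield [np.expand_dims(img.astype(np.float32), axis=0)]

def quantize_int8(saved_model_dir, images):
    """Convert a SavedModel to a fully int8-quantized .tflite flatbuffer."""
    import tensorflow as tf  # deferred import; heavy dependency

    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = lambda: representative_dataset(images)
    # Force full-integer ops: TFLite Micro rejects hybrid (float-activation)
    # models, as the conv.cc error in point 4 shows.
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8
    return converter.convert()
```

The calibration images should come from the same distribution (and use the same preprocessing) as the training data; feeding zeros or random noise here typically ruins int8 accuracy.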
github-actions bot changed the title to "mobilenetv3_small_float32.tflite量化成int8后精度下降严重 (TFMIC-41)" on Nov 7, 2024.
Hello @xiaojianjun22, I see that int8 quantisation is problematic for MobileNetV3. I have not found a solution for this yet.
You are right that the float16 model has good accuracy. However, I would not advocate for it, and as you found out, tflite-micro does not support hybrid models. Could you raise this question on the tflite-micro forum, where it could be addressed better?
Hello again @xiaojianjun22. Even after spending a considerable amount of time on MobileNetV3-Small quantization, I could not get it working (QAT did not work, and PTQ int8 gave poor accuracy).
I think the PyTorch tools here could be used to get it working: https://pytorch.org/tutorials/advanced/static_quantization_tutorial.html
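The eager-mode static-quantization flow from that PyTorch tutorial can be sketched on a toy model. `TinyNet` below is a hypothetical stand-in for MobileNetV3, not the tutorial's model; this assumes a `torch` install with the fbgemm backend (x86):

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """Toy conv net with quant/dequant stubs marking the int8 region."""
    def __init__(self):
        super().__init__()
        self.quant = torch.ao.quantization.QuantStub()
        self.conv = nn.Conv2d(3, 8, kernel_size=3)
        self.relu = nn.ReLU()
        self.dequant = torch.ao.quantization.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)            # float32 -> int8
        x = self.relu(self.conv(x))  # runs in int8 after convert()
        return self.dequant(x)       # int8 -> float32

torch.backends.quantized.engine = "fbgemm"
model = TinyNet().eval()
model.qconfig = torch.ao.quantization.get_default_qconfig("fbgemm")
torch.ao.quantization.prepare(model, inplace=True)   # insert observers
model(torch.randn(1, 3, 32, 32))                     # calibration pass
torch.ao.quantization.convert(model, inplace=True)   # swap in int8 ops
```

Note that a model quantized this way still needs an export path (e.g. via ONNX) to reach a .tflite file for esp-tflite-micro, which adds its own complications.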
I understand that this is not exactly what you are looking for, but where I was able to get good accuracy for a quantised model was with MobileNetV2. I would recommend using V2 if you are not fixated on V3.
I am attaching the Python file I developed over time, which you can use both to convert models and to test them. (Please change quantize_tflite_model.txt to a .py extension.) quantize_tflite_model.txt requirements.txt
Please share if you find a precise solution for MobileNetV3.