I downloaded the quantization-aware training int8 model (saved_model.pb) from the google-research/google-research repo and ran it with the run_squad.py provided there under TensorFlow 1.15: https://github.com/google-research/google-research/tree/master/mobilebert/run_squad.py. I also tried converting this model with TensorFlow 2.2/2.3/2.5/2.6/2.8-nightly, but the conversion failed in every version.
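For reference, this is roughly the conversion call I tried (a minimal sketch; the SavedModel path is a placeholder for wherever the downloaded model is unpacked):

```python
import tensorflow as tf  # tried 1.15 and 2.2/2.3/2.5/2.6/2.8-nightly

# "./mobilebert_squad_qat" is a placeholder for the directory
# containing the downloaded saved_model.pb.
converter = tf.lite.TFLiteConverter.from_saved_model("./mobilebert_squad_qat")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()  # this is where every version fails for me

with open("mobilebert_quant.tflite", "wb") as f:
    f.write(tflite_model)
```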
You mention that:
The TensorFlow Lite models are:
- `quant.tflite` - Quantized (int8, per-channel) .tflite model.
- `quant_nnapi.tflite` - Quantized (int8, per-channel) .tflite model with several mathematically equivalent op replacements for NNAPI compatibility.
So I would like to know: how did you convert the model to mobilebert_int8_384.tflite and mobilebert_int8_384_nnapi.tflite?
Could you please share some details on how the conversion was done?
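For context, this is the kind of flow I expected would produce the full-int8 model. It is only my guess, assuming the TF 2.x converter API and that the QAT fake-quant ranges are already baked into the SavedModel; the exact recipe you used is what I'm asking about:

```python
import tensorflow as tf  # 2.x API

# My guess at the full-int8 conversion; the path and settings below are
# assumptions on my part, not a confirmed recipe.
converter = tf.lite.TFLiteConverter.from_saved_model("./mobilebert_squad_qat")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# Restrict to the int8 builtin kernels, since the model was trained
# quantization-aware and should already carry min/max ranges:
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tflite_model = converter.convert()

with open("mobilebert_int8_384.tflite", "wb") as f:
    f.write(tflite_model)
```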