Issues: google-ai-edge/ai-edge-torch
Request for tflite full int8 static quantization with calibration dataset
Labels: status:awaiting ai-edge-developer, type:feature, type:quantization
#296 opened Oct 15, 2024 by hayyaw
Conversion fails on custom models (DCGAN, pix2pix)
Labels: status:awaiting ai-edge-developer, type:bug, type:precision/accuracy
#295 opened Oct 11, 2024 by jchwenger
Error Using Converted Phi-3.5-mini TFLite in Android App
Labels: status:awaiting ai-edge-developer, type:feature
#293 opened Oct 11, 2024 by chienhuikuo
Torch unsqueeze / reshape dimensions not preserved upon TFLite conversion
Labels: status:awaiting ai-edge-developer, type:bug
#288 opened Oct 8, 2024 by gudgud96
Getting an error while generating quantized tiny llama model due to AI-edge-quantizer
Labels: status:awaiting user response, status:more data needed, type:bug
#286 opened Oct 7, 2024 by pallaviNNT
Unable to convert Qwen2.5 to mediapipe supported format
Labels: status:awaiting ai-edge-developer, type:support
#275 opened Oct 1, 2024 by atultiwari
Converting MaxPool2D with dynamic spatial dimensions crashes
Labels: status:awaiting ai-edge-developer, type:bug
#270 opened Sep 30, 2024 by sc-aharri
Not able to convert Llama 3.2 1B Instruct to Tflite format
Labels: status:awaiting ai-edge-developer, type:bug
#269 opened Sep 29, 2024 by atultiwari
Error when exporting a model that uses torch.sum()
Labels: status:awaiting ai-edge-developer, type:bug
#268 opened Sep 29, 2024 by rishi-menon
Unable to resolve runtime symbol: `xla_mark_tensor'.
Labels: status:awaiting ai-edge-developer, type:bug
#265 opened Sep 27, 2024 by Clod98
Problems with quantization
Labels: status:awaiting ai-edge-developer, type:bug
#264 opened Sep 27, 2024 by spacycoder
Trace model in model-explorer
Labels: status:awaiting user response, type:support
#254 opened Sep 24, 2024 by nigelzzzzzzz
Replace Int64 with Int32 for edge
Labels: type:support
#246 opened Sep 21, 2024 by rfechtner
data_ptr_value % kDefaultTensorAlignment == 0 was not true.
Labels: status:awaiting ai-edge-developer, status:awaiting review, status:contribution welcome, type:bug
#237 opened Sep 18, 2024 by nigelzzzzzzz
Different with gemma2 / gemma
Labels: status:awaiting ai-edge-developer, type:bug
#236 opened Sep 18, 2024 by nigelzzzzzzz
Encountered 'Redefinition of symbol: gelu_decomp_27' issue while converting Qwen2 model to TFLite
Labels: status:awaiting user response, status:stale, type:bug
#235 opened Sep 18, 2024 by tilfdev
Conversion fails on model loaded via torch.load or torch.jit.load
Labels: status:awaiting user response, type:support
#221 opened Sep 13, 2024 by saseptim
OOM Error in Gemini 2 2B TFLite Conversion with Quantization on 80GB RAM
Labels: status:awaiting ai-edge-developer, type:memory
#192 opened Sep 5, 2024 by KennethanCeyer
Tiny-llama Encountered unresolved custom op: odml.update_kv_cache
Labels: type:bug
#175 opened Aug 28, 2024 by vignesh-spericorn
int8 tflite conversion crashes
Labels: status:awaiting ai-edge-developer, type:feature
#150 opened Aug 15, 2024 by codewarrior26
Tensor Shape Mismatch During TFLite Quantization Conversion
Labels: type:bug
#137 opened Aug 8, 2024 by spacycoder
quant_config Dtype INT16 support request
Labels: type:feature
#136 opened Aug 8, 2024 by ZORO-Q
text_generator_main.cc using tinyllama model to inference can show Garbled characters
Labels: status:awaiting ai-edge-developer, type:bug
#109 opened Jul 26, 2024 by nigelzzz
Converting Torch modules that use the max function
Labels: status:awaiting review, type:bug
#81 opened Jul 5, 2024 by hbellafkir