Issues: vllm-project/llm-compressor
Quantize glm-4v-9b with INT8 quantization [bug] #1003, opened Dec 20, 2024 by citrix123
Sequential_update flag not reducing GPU memory usage [bug] #995, opened Dec 19, 2024 by hafezmg48
A tutorial doc on how to use sparse pruning [documentation] #993, opened Dec 18, 2024 by hafezmg48
When will w8a8_int8 quantization be supported for llava-v1.6 models? [enhancement] #990, opened Dec 18, 2024 by wuyu1028
The new version 0.3.0 takes a long time for quantization and eventually fails due to OOM [bug] #965, opened Dec 10, 2024 by okwinds
Error when quantizing Llama 3.3 70B to FP8 [bug] #963, opened Dec 6, 2024 by Syst3m1cAn0maly
How to recover the quantization stage from the finetuning stage after an error [bug] #957, opened Dec 5, 2024 by jiangjiadi
About LoRA finetuning of 2:4 sparse and sparse-quantized models [enhancement] #952, opened Dec 4, 2024 by arunpatala
Quantization + sparsification: model outputs zeros [bug] #942, opened Nov 28, 2024 by nirey10
Finetuning in 2:4 sparsity w4a16 example fails with multiple GPUs [bug] #911, opened Nov 13, 2024 by arunpatala
Is it possible to quantize to FP8 W8A16 without calibration data? [enhancement] #858, opened Oct 21, 2024 by us58
Perplexity (ppl) calculation of local sparse model: NaN issue [bug] #853, opened Oct 19, 2024 by HengJayWang
[USAGE] FP8 W8A8 (+KV) with LoRA adapters [enhancement] #164, opened Sep 11, 2024 by paulliwog
YAML parsing fails with a custom mapping provided to SmoothQuantModifier recipe [bug] #105, opened Aug 22, 2024 by aatkinson
Layers not skipped with ignore=["re:.*"] [bug] #91, opened Aug 15, 2024 by horheynm