Question related to the model tuning #39
Hi,

Great work, first of all!

I am confused about the model-tuning part. According to the code, it seems that you used the LoRA method. In my opinion, merging the LoRA weights back into the model weights would destroy the sparsity created in the original model. Could you explain this?

Thanks,
Shawn

Comments

Hi @shawnricecake, LLM-Pruner is a structural method and thus produces a dense model after pruning.

Hi, thanks for your reply. So the model weights will still be dense after merging the LoRA weights? And the main contribution of the paper is the structural pruning? Thanks
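The maintainer's point can be illustrated with a minimal sketch (the shapes, rank, and scaling below are hypothetical, not taken from the LLM-Pruner code): structural pruning removes whole rows/columns, leaving a smaller but fully dense weight matrix, so merging a LoRA update into it cannot destroy any sparsity pattern.

```python
import numpy as np

# Hypothetical dimensions for illustration only.
d_out, d_in, r = 8, 16, 2  # pruned layer dims and LoRA rank

rng = np.random.default_rng(0)

# Structural pruning shrinks the matrix but keeps it dense:
# there are no zeroed entries that a merge could overwrite.
W_pruned = rng.standard_normal((d_out, d_in))

# LoRA learns a low-rank update B @ A with the same shape as W_pruned.
B = rng.standard_normal((d_out, r))
A = rng.standard_normal((r, d_in))
scaling = 1.0  # stands in for alpha / r in typical LoRA implementations

# Merging: W' = W + scaling * (B @ A).
# The result is dense with the same (pruned) shape as W_pruned.
W_merged = W_pruned + scaling * (B @ A)

assert W_merged.shape == W_pruned.shape
```

This is the contrast with unstructured (magnitude) pruning, where the weight matrix keeps its original shape but contains individual zeros; there, a dense low-rank update `B @ A` would indeed fill in the zeroed entries unless it were masked.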