This repository has been archived by the owner on Dec 14, 2023. It is now read-only.

Normal finetuning instead of LoRA #149

Open
julkaztwittera opened this issue Nov 18, 2023 · 0 comments

Comments

@julkaztwittera

Is there a way I can set the train config to do normal (full) finetuning on a large dataset instead of LoRA?
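
For reference, a minimal sketch of what full finetuning (as opposed to LoRA) typically looks like, assuming a Hugging Face Transformers-style setup rather than this repository's own train config; the model name, hyperparameters, and toy dataset below are placeholders, not this repo's API:

```python
# Minimal sketch (not specific to this repository): full finetuning usually
# just means skipping the LoRA wrapper so every model parameter stays trainable.
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "gpt2"  # placeholder; swap in the checkpoint you want to finetune
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

# Full finetuning: do NOT call peft.get_peft_model(model, LoraConfig(...));
# all parameters already require gradients by default.
assert all(p.requires_grad for p in model.parameters())

# Toy dataset so the sketch runs end to end; substitute your large dataset here.
train_dataset = Dataset.from_dict({"text": ["hello world", "full finetuning"]}).map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=64),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="out/full-finetune",
        per_device_train_batch_size=1,
        num_train_epochs=1,
        learning_rate=2e-5,  # full finetuning typically uses a lower LR than LoRA
    ),
    train_dataset=train_dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Note that full finetuning updates all weights, so memory use and the appropriate learning rate differ substantially from a LoRA run; whether the repository's train config exposes this as an option is exactly what the question asks.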
