Add more settings to CompletionOptions #13
Hey @TyDunn, can I move forward with this one? Please let me know.
@ffshreyansh We'd love for you to contribute! Let's check with @sestinj about how to scope this one, if it still makes sense now.
@ffshreyansh This is definitely still open for contribution! I think one of the last parameters we'll want to add is
Every LLM completion is passed a set of parameters in the CompletionOptions object. We currently support common settings like max_tokens, temperature, top_p, top_k, frequency_penalty, and presence_penalty, but are missing things like tail-free sampling or certain mirostat parameters. Some model providers, like llama.cpp, will accept these, so it is only a matter of allowing the parameter to be passed in.

Update CompletionOptions to have the parameter. Each of the providers (in the core/llm/llms folder) has a function called _convertArgs that turns the CompletionOptions object into the request body expected by their API. For the providers that support this parameter, make sure that it gets passed to the request. For other providers, take a look to make sure that this extraneous parameter doesn't get sent in the request.