[Bug]: Function Calling Not Working with New o1 Model via litellm #7292
Comments
acknowledging this - thanks for the issue. will work on it today.
Going through list of openai params and seeing what is not yet supported by o1
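For reference, litellm ships a utility for inspecting which OpenAI params it maps for a given model; a minimal sketch, assuming `litellm.get_supported_openai_params` behaves as documented (the exact list varies by litellm version and model map):

```python
import litellm

# Ask litellm which OpenAI-compatible params it considers supported for o1.
# The result depends on the installed litellm version and its model map.
supported = litellm.get_supported_openai_params(
    model="o1", custom_llm_provider="openai"
)
print(supported)  # check whether "tools", "tool_choice", "stream", etc. are listed
```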
o1-mini also doesn't support this; however, it now works for o1
"latest o1 model supports both text and image inputs"
also need to map new 'developer' role to 'system' role for non-openai providers
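A rough sketch of that mapping, as an illustration only (not litellm's actual implementation):

```python
def map_developer_role(messages: list[dict]) -> list[dict]:
    """Rewrite OpenAI's new 'developer' role to 'system' for providers
    that don't recognize it. Illustrative sketch, not litellm's code."""
    return [
        {**m, "role": "system"} if m.get("role") == "developer" else m
        for m in messages
    ]

# map_developer_role([{"role": "developer", "content": "Be terse."}])
# -> [{"role": "system", "content": "Be terse."}]
```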
…1-preview; o1 currently doesn't support streaming, but the other model versions do. Fixes #7292
…ported params if model map says so. Fixes #7292
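Conceptually, that change filters out optional request params the model map doesn't list as supported before the request is sent; an illustrative sketch of the idea (the helper name and shape below are mine, not litellm's):

```python
def drop_unsupported_params(optional_params: dict, supported: list[str]) -> dict:
    """Keep only the optional params the model map marks as supported.
    Hypothetical helper illustrating the idea, not litellm's actual code."""
    return {k: v for k, v in optional_params.items() if k in supported}

# Example: if the model map says o1 doesn't accept "temperature":
# drop_unsupported_params({"temperature": 0.2, "tools": tools}, ["tools", "tool_choice"])
# -> {"tools": tools}
```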
I thought the new …
we're looking to upgrade …
here's what i see @afbarbaro, it works for:
working on adding fake streaming for o1, so it doesn't cause issues in any client code. i assume they'll eventually roll out real streaming for o1
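Conceptually, fake streaming wraps the complete (non-streamed) response in a one-chunk iterator so client code written against the streaming interface keeps working; a rough sketch of the idea, not litellm's actual implementation:

```python
from typing import Iterator

def fake_stream(full_response: dict) -> Iterator[dict]:
    """Yield the whole completion as a single streaming-style chunk,
    followed by a terminal chunk. Illustrative sketch only."""
    content = full_response["choices"][0]["message"]["content"]
    yield {"choices": [{"index": 0,
                        "delta": {"role": "assistant", "content": content},
                        "finish_reason": None}]}
    yield {"choices": [{"index": 0, "delta": {}, "finish_reason": "stop"}]}
```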
Allows o1 calls to be faked for just the "o1" model, allows native streaming for o1-mini, o1-preview Fixes #7292
Hi @krrishdholakia, is it possible that I still get the error related to function calling when using o1, with the latest litellm version (v1.55.4)?
Hello! I'm having the same issue as @mvrodrig
this fix went out just now on v1.55.6. please bump and let me know if the issue persists
Thanks @krrishdholakia, I confirm the issue is solved!
What happened?
A bug happened!
Hi @krrishdholakia,
I'm encountering an issue when trying to use function calling with the new o1 model (2024-12-17 version) through litellm. The same request works perfectly fine when calling the OpenAI API directly, but fails when using litellm.
Steps to Reproduce:
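The reporter's original snippet isn't preserved in this capture; a hypothetical request of the shape described (the model name, messages, and tool definition below are placeholders, not the original code) would look roughly like this:

```python
import litellm

# Hypothetical reproduction: a function-calling request to the 2024-12-17 o1 model.
# The tool definition is a placeholder, not the reporter's original code.
response = litellm.completion(
    model="o1-2024-12-17",
    messages=[{"role": "user", "content": "What's the weather in Boston?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
)
print(response.choices[0].message)
```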
Expected Result: The OpenAI API responds successfully with a function call:
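The exact payload isn't preserved here either; a successful function call from the OpenAI API generally carries a tool_calls entry shaped roughly like this (all values below are illustrative):

```python
# Illustrative shape of a successful function-call response (values are made up).
expected = {
    "choices": [{
        "message": {
            "role": "assistant",
            "content": None,
            "tool_calls": [{
                "id": "call_abc123",
                "type": "function",
                "function": {"name": "get_weather",
                             "arguments": "{\"city\": \"Boston\"}"},
            }],
        },
        "finish_reason": "tool_calls",
    }]
}
```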
Actual Result: I receive the following error:
I then set drop_params: true as suggested. When I set it, I receive the following response:
Of course this indicates that the function call capability is lost and the assistant only returns a generic response without calling the function.
Any guidance on how to properly format the request or if there’s a workaround would be greatly appreciated.
Thanks!
Relevant log output
Are you a ML Ops Team?
No
What LiteLLM version are you on ?
v1.55.3
Twitter / LinkedIn details
No response