interpreter.llm.completion = custom_language_model does not seem to work #1182
Comments
This might be an issue with the official documentation.

```python
def custom_language_model(**params):
    """
    OpenAI-compatible completions function (this one just echoes what the user said back).
    """
    openai_message = params['messages']
    users_content = openai_message[-1].get("content")

    # To make it OpenAI-compatible, we yield this first:
    yield {"delta": {"role": "assistant"}}

    for character in users_content:
        yield {"delta": {"content": character}}
```

You can print `params` to inspect its structure.
I ran into this problem myself. @Delva0 is correct about converting the function signature to `def custom_language_model(messages, model, stream, max_tokens):`, but yielding is not working for me at the moment.
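For reference, a minimal sketch of what that comment describes: the same echo function rewritten with an explicit signature instead of `**params`. The parameter list `(messages, model, stream, max_tokens)` is taken from the comment above; whether Open Interpreter passes exactly these keyword arguments is an assumption, so check what `params` actually contains on your version first.

```python
def custom_language_model(messages, model, stream, max_tokens):
    """
    Echo completions function with an explicit signature, as suggested above.
    NOTE: the exact keyword arguments Open Interpreter passes are an assumption
    here; confirm them (e.g. with a **params debugging version) on your install.
    """
    users_content = messages[-1].get("content")

    # Yield an OpenAI-style role delta first...
    yield {"delta": {"role": "assistant"}}

    # ...then stream the user's message back one character at a time.
    for character in users_content:
        yield {"delta": {"content": character}}
```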
Describe the bug
I attempted to run the custom-model example, which is supposed to just echo back what the user said, but it was not successful.
I tried modifying
interpreter.llm.completions = custom_language_model
but the parameters don't match up.
Reproduce
Run the example at https://docs.openinterpreter.com/language-models/custom-models
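For context, a minimal sketch of the setup being attempted, assuming an echo-style completions function like the one shown in the comments above (the exact code on the linked docs page may differ between versions):

```python
from interpreter import interpreter

def custom_language_model(**params):
    """Echo completions function, roughly as the docs describe it."""
    users_content = params["messages"][-1].get("content")
    yield {"delta": {"role": "assistant"}}
    for character in users_content:
        yield {"delta": {"content": character}}

# Wire the custom function in and trigger a chat turn
interpreter.llm.completions = custom_language_model
interpreter.chat("Hello!")  # expected to simply echo "Hello!" back
```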
Expected behavior
The custom model should just echo back what the user said.
Screenshots
No response
Open Interpreter version
0.2.4
Python version
3.11
Operating System name and version
win 11
Additional context
No response