"API call didn't return a message" error repeating very frequently in v5.5, with openai. #2255

Open
HG2407 opened this issue Dec 15, 2024 · 5 comments

Comments

@HG2407

HG2407 commented Dec 15, 2024

Describe the bug
The "API call didn't return a message" error is repeating very frequently with letta v0.5.5. I am using OpenAI's gpt-4o. The inner monologue is displayed every time on the UI, but no actual user response is given. I have been able to debug the issue: in a normal response, when everything works correctly, message.choices contains a tool call for send_message, but whenever this issue arises message.choices doesn't contain any function call at all. According to a previously closed bug this should already have been fixed, but it is still there and is very frustrating. I am attaching the logs as well.

correct normal response:
original response choices in unpack_all_inner_thoughts_from_kwargs: {
    "id": "message-fd4674bd-0943-4372-a1bd-55f84da2175d",
    "choices": [
        {
            "finish_reason": "tool_calls",
            "index": 0,
            "message": {
                "content": null,
                "tool_calls": [
                    {
                        "id": "6b6fe6f1-3edb-4a3a-b775-e0edd",
                        "type": "function",
                        "function": {
                            "arguments": "{\"inner_thoughts\":\"Keeping the interaction engaging. Ensuring consistency in responding to Harshit.\",\"message\":\"Hey there, Harshit! Let me know if there's anything specific you need or if you'd just like to chat.\"}",
                            "name": "send_message"
                        }
                    }
                ],
                "role": "assistant",
                "function_call": null
            },
            "logprobs": null,
            "seed": null
        }
    ],
    "created": "2024-12-15T19:25:37.356338Z",
    "model": "gpt-4o-2024-08-06",
    "system_fingerprint": "fp_a79d8dac1f",
    "object": "chat.completion",
    "usage": {
        "completion_tokens": 48,
        "prompt_tokens": 15582,
        "total_tokens": 15630
    }
}
_get_ai_reply response: {
    "id": "message-fd4674bd-0943-4372-a1bd-55f84da2175d",
    "choices": [
        {
            "finish_reason": "tool_calls",
            "index": 0,
            "message": {
                "content": "Keeping the interaction engaging. Ensuring consistency in responding to Harshit.",
                "tool_calls": [
                    {
                        "id": "6b6fe6f1-3edb-4a3a-b775-e0edd",
                        "type": "function",
                        "function": {
                            "arguments": "{\n  \"message\": \"Hey there, Harshit! Let me know if there's anything specific you need or if you'd just like to chat.\"\n}",
                            "name": "send_message"
                        }
                    }
                ],
                "role": "assistant",
                "function_call": null
            },
            "logprobs": null,
            "seed": null
        }
    ],
    "created": "2024-12-15T19:25:37.356338Z",
    "model": "gpt-4o-2024-08-06",
    "system_fingerprint": "fp_a79d8dac1f",
    "object": "chat.completion",
    "usage": {
        "completion_tokens": 48,
        "prompt_tokens": 15582,
        "total_tokens": 15630
    }
}
error-causing response:
original response choices in unpack_all_inner_thoughts_from_kwargs: {
    "id": "message-4c44f4dd-95d5-45c1-ac8f-0c118a5b2967",
    "choices": [
        {
            "finish_reason": "stop",
            "index": 0,
            "message": {
                "content": "Hello again, Harshit! 👋 How can I assist you today? 😊",
                "tool_calls": null,
                "role": "assistant",
                "function_call": null
            },
            "logprobs": null,
            "seed": null
        }
    ],
    "created": "2024-12-15T19:20:23.558565Z",
    "model": "gpt-4o-2024-08-06",
    "system_fingerprint": "fp_a79d8dac1f",
    "object": "chat.completion",
    "usage": {
        "completion_tokens": 17,
        "prompt_tokens": 8289,
        "total_tokens": 8306
    }
}
_get_ai_reply response: {
    "id": "message-4c44f4dd-95d5-45c1-ac8f-0c118a5b2967",
    "choices": [
        null
    ],
    "created": "2024-12-15T19:20:23.558565Z",
    "model": "gpt-4o-2024-08-06",
    "system_fingerprint": "fp_a79d8dac1f",
    "object": "chat.completion",
    "usage": {
        "completion_tokens": 17,
        "prompt_tokens": 8289,
        "total_tokens": 8306
    }
}
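
For illustration, a minimal check along these lines separates the two responses above; the helper name is hypothetical and this is only a sketch of the condition that fails, not Letta code:

# Hypothetical sketch: verify that a ChatCompletion-style response actually
# carries a send_message tool call before treating it as a valid reply.
def has_send_message_call(response) -> bool:
    for choice in response.choices:
        if choice is None:  # the failing _get_ai_reply log shows choices == [null]
            continue
        tool_calls = choice.message.tool_calls or []
        if any(tc.function.name == "send_message" for tc in tool_calls):
            return True
    return False

# The correct response above would return True; the error-causing one returns
# False because finish_reason is "stop" and tool_calls is null.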

Please describe your setup

  • How did you install letta? (pip install letta / pip install letta-nightly / git clone)
    • git clone
  • What's your OS (Windows/MacOS/Linux)?
    • Ubuntu
  • How are you running letta? (cmd.exe/Powershell/Anaconda Shell/Terminal)
    • poetry run letta server

Screenshots
correct response:
(screenshot: Screenshot 2024-12-16 at 1:31:25 AM)
error response:
(screenshot: Screenshot 2024-12-16 at 1:32:19 AM)

Additional context
I have also modified the parameters of _get_ai_reply a little:

def _get_ai_reply(
    self,
    message_sequence: List[Message],
    function_call: str = "auto",
    first_message: bool = False,  # hint
    stream: bool = False,  # TODO move to config?
    fail_on_empty_response: bool = True,  # this was initially False
    empty_response_retry_limit: int = 0,  # this was initially 3
) -> ChatCompletionResponse:
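
For context, the way these two parameters would typically interact in a retry loop looks roughly like the sketch below; this is not Letta's actual implementation, and make_request is just a placeholder for the completion call:

# Sketch of the usual semantics: retry up to empty_response_retry_limit times
# when no tool call comes back, then either raise or return the last response.
def call_with_retries(make_request, fail_on_empty_response=True,
                      empty_response_retry_limit=0):
    response = None
    for _ in range(empty_response_retry_limit + 1):
        response = make_request()
        choice = response.choices[0] if response.choices else None
        if choice is not None and choice.message.tool_calls:
            return response
    if fail_on_empty_response:
        raise RuntimeError("API call didn't return a message")
    return response

With fail_on_empty_response=True and empty_response_retry_limit=0, an empty response fails immediately instead of being silently retried.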

Letta Config
Please attach your ~/.letta/config file or copy-paste it below.


@cpacker
Collaborator

cpacker commented Dec 17, 2024

Are you able to reproduce this error in the latest version?

@HG2407
Author

HG2407 commented Dec 19, 2024

Should I try with the latest version? This should have been fixed well before v0.5.5, so I am not sure.

@cpacker
Collaborator

cpacker commented Dec 19, 2024

If you're able to reproduce this with the latest version, that would be great and would help us debug the ticket faster (if the bug is still happening); we're not able to actively debug issues for older versions (e.g. v0.5.5).

@cpacker
Collaborator

cpacker commented Dec 19, 2024

"But whenever this issue arises the message.choices doesn't contain any function call"

For example, I don't think this should be happening when we use structured outputs, which should be on by default for gpt-4o-2024-08-06 in the latest version (though I'm not sure about v0.5.5).
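
For reference, on the raw OpenAI Chat Completions API the failing case (a plain-content reply with finish_reason "stop" and no tool call) can be ruled out by forcing the tool choice; this is a minimal sketch against the public openai Python client, not Letta's integration, and the send_message schema here is simplified:

# Sketch: force the model to call send_message so it cannot answer with plain
# content; "strict": True opts the function into structured outputs.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-2024-08-06",
    messages=[{"role": "user", "content": "hi"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "send_message",
            "description": "Send a message to the user.",
            "strict": True,
            "parameters": {
                "type": "object",
                "properties": {"message": {"type": "string"}},
                "required": ["message"],
                "additionalProperties": False,
            },
        },
    }],
    tool_choice={"type": "function", "function": {"name": "send_message"}},
)
# With tool_choice pinned to send_message, the returned message carries a
# tool_calls entry rather than plain content.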

@AliSayyah
Contributor

AliSayyah commented Dec 20, 2024

I have the exact same issue with version 0.6.5 for existing agents; new agents work fine. By the way, I'm using gpt-4o-mini.
