Replies: 7 comments 7 replies
-
Hi @hi019 this is not a bug, it's expected behaviour, since openai's pydantic object expects content, if not None, to be a non-empty string. What would you expect to happen here?
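To illustrate that constraint, here is a minimal sketch (an assumed stand-in, not the actual OpenAI/LiteLLM model; the class and validator names are hypothetical):

```python
from typing import Optional

from pydantic import BaseModel, field_validator


class Message(BaseModel):
    # Hypothetical stand-in for the OpenAI-style message object.
    role: str
    content: Optional[str] = None

    @field_validator("content")
    @classmethod
    def non_empty_if_present(cls, v: Optional[str]) -> Optional[str]:
        # content may be None, but if it is a string it must be non-empty.
        if v is not None and v == "":
            raise ValueError("content must be a non-empty string when not None")
        return v
```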
-
closing as this isn't a "bug", but i'm open to your feedback on this @hi019
-
I was able to replicate this; here is more detail. The entire traceback:
status_code: 200
Logging Details: logger_fn - None | callable(logger_fn) - False
Logging Details LiteLLM-Failure Call
python-BaseException
Traceback (most recent call last):
File "/Users/rajan/work/personal/adelaide/sem1/spi/try-lite-llm/.venv/lib/python3.9/site-packages/litellm/main.py", line 320, in acompletion
response = await init_response
File "/Users/rajan/work/personal/adelaide/sem1/spi/try-lite-llm/.venv/lib/python3.9/site-packages/litellm/llms/anthropic.py", line 306, in acompletion_function
return self.process_response(
File "/Users/rajan/work/personal/adelaide/sem1/spi/try-lite-llm/.venv/lib/python3.9/site-packages/litellm/llms/anthropic.py", line 143, in process_response
raise AnthropicError(
litellm.llms.anthropic.AnthropicError: No content in response
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Applications/PyCharm.app/Contents/plugins/python/helpers-pro/pydevd_asyncio/pydevd_nest_asyncio.py", line 135, in run
return loop.run_until_complete(task)
File "/Applications/PyCharm.app/Contents/plugins/python/helpers-pro/pydevd_asyncio/pydevd_nest_asyncio.py", line 238, in run_until_complete
return f.result()
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/asyncio/futures.py", line 201, in result
raise self._exception
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/asyncio/tasks.py", line 256, in __step
result = coro.send(None)
File "/Users/rajan/work/personal/adelaide/sem1/spi/try-lite-llm/another.py", line 10, in test_get_response
resp = await litellm.acompletion(
File "/Users/rajan/work/personal/adelaide/sem1/spi/try-lite-llm/.venv/lib/python3.9/site-packages/litellm/utils.py", line 3697, in wrapper_async
raise e
File "/Users/rajan/work/personal/adelaide/sem1/spi/try-lite-llm/.venv/lib/python3.9/site-packages/litellm/utils.py", line 3529, in wrapper_async
result = await original_function(*args, **kwargs)
File "/Users/rajan/work/personal/adelaide/sem1/spi/try-lite-llm/.venv/lib/python3.9/site-packages/litellm/main.py", line 341, in acompletion
raise exception_type(
File "/Users/rajan/work/personal/adelaide/sem1/spi/try-lite-llm/.venv/lib/python3.9/site-packages/litellm/utils.py", line 9120, in exception_type
raise e
File "/Users/rajan/work/personal/adelaide/sem1/spi/try-lite-llm/.venv/lib/python3.9/site-packages/litellm/utils.py", line 8051, in exception_type
raise APIError(
litellm.exceptions.APIError: AnthropicException - No content in response. Handle with `litellm.APIError`.
Ran the following in the console:
original_exception
AnthropicError('No content in response')
original_exception.status_code
200

This was thrown from:

elif len(completion_response["content"]) == 0:
    raise AnthropicError(
        message="No content in response",
        status_code=response.status_code,
    )

Since the status code from Anthropic is 200, if we want consistency and always expect our library to give a consistent output across the board, we can raise a different exception.
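In the meantime, a caller can catch the mapped exception; a minimal sketch (assuming `litellm.APIError` exposes a `status_code` attribute):

```python
import litellm


async def completion_or_none(**kwargs):
    try:
        return await litellm.acompletion(**kwargs)
    except litellm.APIError as e:
        # The empty-content case surfaces here as a mapped APIError,
        # even though Anthropic's own HTTP status was 200.
        print(f"APIError ({getattr(e, 'status_code', 'n/a')}): {e}")
        return None
```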
-
Or we can do something like:
-
@paneru-rajan i haven't tried the initial prompt yet. Is that a consistent repro? If it's inconsistent -> i'm assuming an error makes sense, to allow some retry logic to occur. If it's consistently returning an empty response -> what would you want to happen here? @hi019 @paneru-rajan
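If it does turn out to be intermittent, a simple retry wrapper would cover it; a sketch (the retry count and backoff here are arbitrary):

```python
import asyncio

import litellm


async def acompletion_with_retry(max_attempts: int = 3, **kwargs):
    for attempt in range(1, max_attempts + 1):
        try:
            return await litellm.acompletion(**kwargs)
        except litellm.APIError:
            # Empty-content responses surface as APIError; retry a few
            # times in case the empty response is intermittent.
            if attempt == max_attempts:
                raise
            await asyncio.sleep(2 ** attempt)
```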
-
@paneru-rajan we don't raise a generic anthropic error. We raise an 'APIError' -> we map our exceptions to the openai exceptions: https://platform.openai.com/docs/guides/error-codes/api-errors This maps to a '500' error, which indicates a server-side error and is an acceptable error code for retrying - Line 7629 in fa47ce4. In this case - it seems like raising an error is ok, but the error code should not be a retry-able exception?
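For illustration, retries could be gated on the mapped status code; a sketch (the set of retry-able codes here is an assumption, not LiteLLM's actual list):

```python
# Assumed set of transient, retry-able OpenAI-style status codes.
RETRYABLE_STATUS_CODES = {429, 500, 502, 503, 504}


def is_retryable(exc: Exception) -> bool:
    # Retry only when the mapped status code indicates a transient
    # server-side failure; skip client-side (4xx) errors.
    status = getattr(exc, "status_code", None)
    return status in RETRYABLE_STATUS_CODES
```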
-
I am also facing the issue where Claude starts giving blank content with the assistant role. This usually happens when we feed tool output into the chat history. Any idea how to fix it?
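One possible workaround, purely illustrative (not an official fix): sanitize the chat history before sending, replacing or dropping blank assistant turns, since the empty string is what trips the validation:

```python
def sanitize_history(messages: list[dict]) -> list[dict]:
    cleaned = []
    for m in messages:
        if m.get("role") == "assistant" and m.get("content") == "":
            # Placeholder content; dropping the turn entirely also works.
            m = {**m, "content": " "}
        cleaned.append(m)
    return cleaned
```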
-
What happened?
Sometimes Anthropic returns an empty `content` array. LiteLLM doesn't expect this and throws an error.

Repro:
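A minimal sketch of the call shape, with a placeholder model name and prompt (both are assumptions, not the original snippet):

```python
import asyncio

import litellm


async def main():
    # Hypothetical repro shape: the error fires whenever the provider
    # returns an empty `content` array for this call.
    resp = await litellm.acompletion(
        model="claude-3-haiku-20240307",  # assumed model name
        messages=[{"role": "user", "content": "..."}],
    )
    print(resp)


asyncio.run(main())
```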
Relevant log output
No response
Twitter / LinkedIn details
No response