
Stacktrace on letta server #2052

Open
distributev opened this issue Nov 17, 2024 · 2 comments
@distributev

I have configured Letta with two agents, both using GPT-4o mini via OpenRouter, and I have a valid OpenRouter API key.

When I run `letta server`, I get a large "Unauthorized" stack trace for an OpenAI URL. Why does it try to connect to OpenAI when I configured OpenRouter?

INFO:     Started server process [176]
INFO:     Waiting for application startup.
C:\Projects\all-repos\src\distributev\ai-crew\bkend-core-python\letta\.venv-letta\Lib\site-packages\pydantic\json_schema.py:2191: PydanticJsonSchemaWarning: Default value <property object at 0x000001E359124770> is not JSON serializable; excluding default from JSON schema [non-serializable-default]
  warnings.warn(message, PydanticJsonSchemaWarning)
INFO:     Application startup complete.
INFO:     Uvicorn running on http://localhost:8283 (Press CTRL+C to quit)
INFO:     ::1:56154 - "GET /v1/agents/ HTTP/1.1" 200 OK
INFO:     ::1:56157 - "GET /v1/blocks/?label=human HTTP/1.1" 200 OK
INFO:     ::1:56156 - "GET /v1/blocks/?label=persona HTTP/1.1" 200 OK
INFO:     ::1:56158 - "GET /v1/tools/ HTTP/1.1" 200 OK
INFO:     ::1:56155 - "GET /v1/models/ HTTP/1.1" 500 Internal Server Error
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "C:\Projects\all-repos\src\distributev\ai-crew\bkend-core-python\letta\.venv-letta\Lib\site-packages\uvicorn\protocols\http\httptools_impl.py", line 426, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Projects\all-repos\src\distributev\ai-crew\bkend-core-python\letta\.venv-letta\Lib\site-packages\uvicorn\middleware\proxy_headers.py", line 84, in __call__
    return await self.app(scope, receive, send)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Projects\all-repos\src\distributev\ai-crew\bkend-core-python\letta\.venv-letta\Lib\site-packages\fastapi\applications.py", line 1106, in __call__
    await super().__call__(scope, receive, send)
  File "C:\Projects\all-repos\src\distributev\ai-crew\bkend-core-python\letta\.venv-letta\Lib\site-packages\starlette\applications.py", line 122, in __call__
    await self.middleware_stack(scope, receive, send)
  File "C:\Projects\all-repos\src\distributev\ai-crew\bkend-core-python\letta\.venv-letta\Lib\site-packages\starlette\middleware\errors.py", line 184, in __call__
    raise exc
  File "C:\Projects\all-repos\src\distributev\ai-crew\bkend-core-python\letta\.venv-letta\Lib\site-packages\starlette\middleware\errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "C:\Projects\all-repos\src\distributev\ai-crew\bkend-core-python\letta\.venv-letta\Lib\site-packages\starlette\middleware\cors.py", line 83, in __call__
    await self.app(scope, receive, send)
  File "C:\Projects\all-repos\src\distributev\ai-crew\bkend-core-python\letta\.venv-letta\Lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__
    raise exc
  File "C:\Projects\all-repos\src\distributev\ai-crew\bkend-core-python\letta\.venv-letta\Lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "C:\Projects\all-repos\src\distributev\ai-crew\bkend-core-python\letta\.venv-letta\Lib\site-packages\fastapi\middleware\asyncexitstack.py", line 20, in __call__
    raise e
  File "C:\Projects\all-repos\src\distributev\ai-crew\bkend-core-python\letta\.venv-letta\Lib\site-packages\fastapi\middleware\asyncexitstack.py", line 17, in __call__
    await self.app(scope, receive, send)
  File "C:\Projects\all-repos\src\distributev\ai-crew\bkend-core-python\letta\.venv-letta\Lib\site-packages\starlette\routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "C:\Projects\all-repos\src\distributev\ai-crew\bkend-core-python\letta\.venv-letta\Lib\site-packages\starlette\routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "C:\Projects\all-repos\src\distributev\ai-crew\bkend-core-python\letta\.venv-letta\Lib\site-packages\starlette\routing.py", line 66, in app
    response = await func(request)
               ^^^^^^^^^^^^^^^^^^^
  File "C:\Projects\all-repos\src\distributev\ai-crew\bkend-core-python\letta\.venv-letta\Lib\site-packages\fastapi\routing.py", line 274, in app
    raw_response = await run_endpoint_function(
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Projects\all-repos\src\distributev\ai-crew\bkend-core-python\letta\.venv-letta\Lib\site-packages\fastapi\routing.py", line 193, in run_endpoint_function
    return await run_in_threadpool(dependant.call, **values)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Projects\all-repos\src\distributev\ai-crew\bkend-core-python\letta\.venv-letta\Lib\site-packages\starlette\concurrency.py", line 41, in run_in_threadpool
    return await anyio.to_thread.run_sync(func, *args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Projects\all-repos\src\distributev\ai-crew\bkend-core-python\letta\.venv-letta\Lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Projects\all-repos\src\distributev\ai-crew\bkend-core-python\letta\.venv-letta\Lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
           ^^^^^^^^^^^^
  File "C:\Projects\all-repos\src\distributev\ai-crew\bkend-core-python\letta\.venv-letta\Lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Projects\all-repos\src\distributev\ai-crew\bkend-core-python\letta\letta\server\rest_api\routers\v1\llms.py", line 20, in list_llm_backends
    models = server.list_llm_models()
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Projects\all-repos\src\distributev\ai-crew\bkend-core-python\letta\letta\server\server.py", line 1853, in list_llm_models
    llm_models.extend(provider.list_llm_models())
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Projects\all-repos\src\distributev\ai-crew\bkend-core-python\letta\letta\providers.py", line 68, in list_llm_models
    response = openai_get_model_list(self.base_url, api_key=self.api_key, extra_params=extra_params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Projects\all-repos\src\distributev\ai-crew\bkend-core-python\letta\letta\llm_api\openai.py", line 89, in openai_get_model_list
    raise http_err
  File "C:\Projects\all-repos\src\distributev\ai-crew\bkend-core-python\letta\letta\llm_api\openai.py", line 77, in openai_get_model_list
    response.raise_for_status()  # Raises HTTPError for 4XX/5XX status
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Projects\all-repos\src\distributev\ai-crew\bkend-core-python\letta\.venv-letta\Lib\site-packages\requests\models.py", line 1024, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://api.openai.com/v1/models
INFO:     ::1:56158 - "GET /v1/models/ HTTP/1.1" 500 Internal Server Error
INFO:     ::1:56156 - "GET /v1/models/ HTTP/1.1" 500 Internal Server Error
INFO:     ::1:56164 - "GET /v1/models/ HTTP/1.1" 500 Internal Server Error
(the identical `401 Client Error: Unauthorized for url: https://api.openai.com/v1/models` traceback above is repeated for each of these requests)
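For reference, the exact error message in the logs comes from `requests`' `Response.raise_for_status()`. It can be reproduced offline by constructing a `Response` by hand, which confirms that any 401 from the probed `/v1/models` URL produces this traceback tail regardless of which provider was intended:

```python
import requests

# Build a Response by hand to illustrate the error raised by raise_for_status()
# when a provider's /v1/models endpoint answers 401 (no live network call needed).
resp = requests.models.Response()
resp.status_code = 401
resp.reason = "Unauthorized"
resp.url = "https://api.openai.com/v1/models"

try:
    resp.raise_for_status()
except requests.exceptions.HTTPError as err:
    print(err)  # 401 Client Error: Unauthorized for url: https://api.openai.com/v1/models
```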
@distributev
Author

Why does it try to list OpenAI models when I nowhere configured that I want OpenAI ChatGPT or anything else from OpenAI?

  File "C:\Projects\all-repos\src\distributev\ai-crew\bkend-core-python\letta\.venv-letta\Lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Projects\all-repos\src\distributev\ai-crew\bkend-core-python\letta\letta\server\rest_api\routers\v1\llms.py", line 20, in list_llm_backends
    models = server.list_llm_models()
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Projects\all-repos\src\distributev\ai-crew\bkend-core-python\letta\letta\server\server.py", line 1853, in list_llm_models
    llm_models.extend(provider.list_llm_models())
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Projects\all-repos\src\distributev\ai-crew\bkend-core-python\letta\letta\providers.py", line 68, in list_llm_models
    response = openai_get_model_list(self.base_url, api_key=self.api_key, extra_params=extra_params)
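The 500s happen because `list_llm_models()` aggregates models across all registered providers and nothing catches a failure from any single one. A server could instead skip providers whose credentials are missing or invalid; here is a minimal sketch of that guard, using a hypothetical `Provider` stand-in rather than Letta's actual classes:

```python
class Provider:
    """Hypothetical stand-in for a model-provider backend (not Letta's real class)."""

    def __init__(self, name, api_key=None):
        self.name = name
        self.api_key = api_key

    def list_llm_models(self):
        # Pretend a provider with no valid key fails the same way the real
        # /v1/models probe does (401 Unauthorized in the logs above).
        if not self.api_key:
            raise RuntimeError(f"401 Client Error: Unauthorized for {self.name}")
        return [f"{self.name}/gpt-4o-mini"]


def list_llm_models(providers):
    """Aggregate models, skipping providers that fail instead of
    letting one misconfigured backend turn the whole route into a 500."""
    models = []
    for provider in providers:
        try:
            models.extend(provider.list_llm_models())
        except RuntimeError as err:
            print(f"warning: skipping {provider.name}: {err}")
    return models


providers = [Provider("openai"), Provider("openrouter", api_key="sk-or-...")]
print(list_llm_models(providers))  # ['openrouter/gpt-4o-mini']
```

With this pattern, a misconfigured OpenAI entry would only log a warning while the OpenRouter models still come back from `/v1/models/`.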


This issue is stale because it has been open for 30 days with no activity.

@github-actions github-actions bot added the stale label Dec 18, 2024